Similar articles
20 similar articles found (search time: 31 ms)
1.
We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.

2.
3.
Previous masked priming research in word recognition has demonstrated that repetition priming is influenced by experiment-wise information structure, such as proportion of target repetition. Research using naturalistic tasks and eye-tracking has shown that people use linguistic knowledge to anticipate upcoming words. We examined whether the proportion of target repetition within an experiment can have a similar effect on anticipatory eye movements. We used a word-to-picture matching task (i.e., the visual world paradigm) with target repetition proportion carefully controlled. Participants' eye movements were tracked starting when the pictures appeared, one second prior to the onset of the target word. Targets repeated from the previous trial were fixated more than other items during this preview period when target repetition proportion was high and less than other items when target repetition proportion was low. These results indicate that linguistic anticipation can be driven by short-term within-experiment trial structure, with implications for the generalization of priming effects, the bases of anticipatory eye movements, and experiment design.
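A minimal sketch (with invented data, not the study's own) of how the preview-window measure described above could be computed: the proportion of trials on which the repeated item was fixated during the one-second preview, split by repetition-proportion condition.

```python
from collections import defaultdict

# Invented preview-window records: (repetition-proportion condition,
# whether the repeated item was fixated during the 1 s preview).
trials = [
    ("high", True), ("high", True), ("high", False), ("high", True),
    ("low", False), ("low", False), ("low", True), ("low", False),
]

counts = defaultdict(lambda: [0, 0])  # condition -> [looks to repeated item, trials]
for condition, on_repeated in trials:
    counts[condition][0] += int(on_repeated)
    counts[condition][1] += 1

proportions = {c: hits / total for c, (hits, total) in counts.items()}
# The study's pattern: more preview looks to the repeated item under "high"
# repetition proportion, fewer under "low".
```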

4.
The present study explored whether language-nonselective access in bilinguals occurs across word classes in a sentence context. Dutch–English bilinguals were auditorily presented with English (L2) sentences while looking at a visual world. The sentences contained interlingual homophones from distinct lexical categories (e.g., the English verb spoke, which overlaps phonologically with the Dutch noun for ghost, spook). Eye movement recordings showed that depictions of referents of the Dutch (L1) nouns attracted more visual attention than unrelated distractor pictures in sentences containing homophones. This finding shows that native language objects are activated during second language verb processing despite the structural information provided by the sentence context.

5.
We present a computational model of grasping of non-fixated (extrafoveal) target objects which is implemented on a robot setup, consisting of a robot arm with cameras and gripper. This model is based on the premotor theory of attention (Rizzolatti et al., 1994) which states that spatial attention is a consequence of the preparation of goal-directed, spatially coded movements (especially saccadic eye movements). In our model, we add the hypothesis that saccade planning is accompanied by the prediction of the retinal images after the saccade. The foveal region of these predicted images can be used to determine the orientation and shape of objects at the target location of the attention shift. This information is necessary for precise grasping. Our model consists of a saccade controller for target fixation, a visual forward model for the prediction of retinal images, and an arm controller which generates arm postures for grasping. We compare the precision of the robotic model in different task conditions, among them grasping (1) towards fixated target objects using the actual retinal images, (2) towards non-fixated target objects using visual prediction, and (3) towards non-fixated target objects without visual prediction. The first and second setting result in good grasping performance, while the third setting causes considerable errors of the gripper orientation, demonstrating that visual prediction might be an important component of eye–hand coordination. Finally, based on the present study we argue that the use of robots is a valuable research methodology within psychology.
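The visual forward model in this architecture predicts the retinal image that a planned saccade would produce. A crude sketch of that idea (not the authors' implementation) is an image translation by the saccade vector, with regions revealed by the shift left unknown:

```python
import numpy as np

def predict_post_saccade(retina, saccade):
    """Crude forward model: after a saccade of (dy, dx), the content at
    predicted[y, x] is what was at retina[y + dy, x + dx] before the
    saccade; regions revealed by the shift are unknown (zeros)."""
    dy, dx = saccade
    h, w = retina.shape
    predicted = np.zeros_like(retina)
    predicted[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)] = \
        retina[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)]
    return predicted

scene = np.arange(25).reshape(5, 5)
pred = predict_post_saccade(scene, (1, 0))  # plan a one-row downward saccade
```

The central (foveal) patch of the predicted image could then feed the arm controller with the expected orientation and shape at the attended location, as the model proposes.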

6.
We investigated whether the deployment of attention in scenes is better explained by visual salience or by cognitive relevance. In two experiments, participants searched for target objects in scene photographs. The objects appeared in semantically appropriate locations but were not visually salient within their scenes. Search was fast and efficient, with participants much more likely to look to the targets than to the salient regions. This difference was apparent from the first fixation and held regardless of whether participants were familiar with the visual form of the search targets. In the majority of trials, salient regions were not fixated. The critical effects were observed for all 24 participants across the two experiments. We outline a cognitive relevance framework to account for the control of attention and fixation in scenes.

7.
A "follow-the-dot" method was used to investigate the visual memory systems supporting accumulation of object information in natural scenes. Participants fixated a series of objects in each scene, following a dot cue from object to object. Memory for the visual form of a target object was then tested. Object memory was consistently superior for the two most recently fixated objects, a recency advantage indicating a visual short-term memory component to scene representation. In addition, objects examined earlier were remembered at rates well above chance, with no evidence of further forgetting when 10 objects intervened between target examination and test and only modest forgetting with 402 intervening objects. This robust prerecency performance indicates a visual long-term memory component to scene representation.

8.
Researchers often conduct visual world studies to investigate how listeners integrate linguistic information with prior context. Such studies are likely to generate anticipatory baseline effects (ABEs), differences in listeners' expectations about what a speaker might mention that exist before a critical speech stimulus is presented. ABEs show that listeners have attended to and accessed prior contextual information in time to influence the processing of the critical speech stimulus. However, further evidence is required to show that the information actually did influence subsequent processing. ABEs can compromise the validity of inferences about information integration if they are not appropriately controlled. We discuss four solutions: statistical estimation, experimental control, elimination of "on-target" trials, and neutral gaze. An experiment compares the performance of these solutions, and suggests that the elimination of on-target trials introduces bias in the direction of ABEs, due to the statistical phenomenon of regression toward the mean. We conclude that statistical estimation, possibly coupled with experimental control, offers the most valid and least biased solution.
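The regression-toward-the-mean mechanism invoked above can be illustrated with a small simulation (all parameters invented, and no true effect built in): preview gaze and critical-window gaze are modeled as two noisy measures of the same per-trial tendency, so excluding trials that are "on target" at preview shifts the retained critical-window measure even though processing is unchanged. The sketch shows the selection bias itself, not the specific direction reported in the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
tendency = rng.normal(0.0, 1.0, n)              # latent per-trial pull toward the target
preview  = tendency + rng.normal(0.0, 1.0, n)   # noisy gaze measure before the critical word
critical = tendency + rng.normal(0.0, 1.0, n)   # noisy gaze measure during the critical word

kept = preview < 1.0               # "eliminate on-target trials" at preview
full_mean = critical.mean()        # near 0: no true effect in this simulation
kept_mean = critical[kept].mean()  # shifted away from 0 purely by selection
```

Because preview and critical gaze share the latent tendency, conditioning on one biases the other; this is why the abstract favors statistical estimation over trial elimination.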

9.
An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

10.
Huettig, F., & Altmann, G. T. (2005). Cognition, 96(1), B23–B32.
When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107; Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632-1634]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word 'piano' when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c) both the piano and the trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word 'piano' unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.

11.
Humans often look at other people in natural scenes, and previous research has shown that these looks follow the conversation and that they are sensitive to sound in audiovisual speech perception. In the present experiment, participants viewed video clips of four people involved in a discussion. By removing the sound, we asked whether auditory information would affect when speakers were fixated, how fixations between different observers were synchronized, and whether the eyes or mouth were looked at most often. The results showed that sound changed the timing of looks—by alerting observers to changes in conversation and attracting attention to the speaker. Clips with sound also led to greater attentional synchrony, with more observers fixating the same regions at the same time. However, looks towards the eyes of the people continued to dominate and were unaffected by removing the sound. These findings provide a rich example of multimodal social attention.

12.
Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech–gesture integration processes.

13.
Objects likely to appear in a given real-world scene are frequently found to be easier to recognize. Two different sources of contextual information have been proposed as the basis for this effect: global scene background and individual companion objects. The present paper examines the relative importance of these two elements in explaining the context-sensitivity of object identification in full scenes. Specific sequences of object fixations were elicited during free scene exploration, while fixation times on designated target objects were recorded as a measure of ease of target identification. Episodic consistency between the target, the global scene background, and the object fixated just prior to the target (the prime), were manipulated orthogonally. Target fixation times were examined for effects of prime and background. Analyses show effects of both factors, which are modulated by the chronology and spatial extent of scene exploration. The results are discussed in terms of their implications for a model of visual object recognition in the context of real-world scenes.

14.
In the visual world paradigm as used in psycholinguistics, eye gaze (i.e. visual orienting) is measured in order to draw conclusions about linguistic processing. However, current theories are underspecified with respect to how visual attention is guided on the basis of linguistic representations. In the visual search paradigm as used within the area of visual attention research, investigators have become more and more interested in how visual orienting is affected by higher order representations, such as those involved in memory and language. Within this area more specific models of orienting on the basis of visual information exist, but they need to be extended with mechanisms that allow for language-mediated orienting. In the present paper we review the evidence from these two different – but highly related – research areas. We arrive at a model in which working memory serves as the nexus in which long-term visual as well as linguistic representations (i.e. types) are bound to specific locations (i.e. tokens or indices). The model predicts that the interaction between language and visual attention is subject to a number of conditions, such as the presence of the guiding representation in working memory, capacity limitations, and cognitive control mechanisms.
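A toy sketch of the model's central claim — working memory binds long-term "types" to spatial "tokens", so a heard word can be routed to a location — with all object names, coordinates, and the capacity limit invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Token:
    """A working-memory token: long-term types bound to a display location."""
    location: tuple      # spatial index (e.g. screen coordinates)
    visual_type: str     # long-term visual representation
    name: str            # long-term linguistic representation

CAPACITY = 4  # hypothetical working-memory limit on active bindings

working_memory = [
    Token((120, 80), "piano-shape", "piano"),
    Token((340, 80), "trumpet-shape", "trumpet"),
    Token((230, 200), "chair-shape", "chair"),
][:CAPACITY]

def orient(word, wm):
    """Language-mediated orienting: the heard word activates a linguistic
    type; attention shifts to the location bound to that type, if any."""
    for token in wm:
        if token.name == word:
            return token.location
    return None  # no guiding representation in working memory -> no capture
```

The model's conditions fall out of the sketch: orienting occurs only if the relevant binding is currently held in working memory, and the capacity limit bounds how many bindings can guide attention at once.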

15.
In prior work, women were found to outperform men on short-term verbal memory tasks. The goal of the present work was to examine whether gender differences on short-term memory tasks are tied to the involvement of long-term memory in the learning process. In Experiment 1, men and women were compared on their ability to remember phonologically-familiar novel words and phonologically-unfamiliar novel words. Learning of phonologically-familiar novel words (but not of phonologically-unfamiliar novel words) can be supported by long-term phonological knowledge. Results revealed that women outperformed men on phonologically-familiar novel words, but not on phonologically-unfamiliar novel words. In Experiment 2, we replicated Experiment 1 using a within-subjects design, and confirmed gender differences on phonologically-familiar, but not on phonologically-unfamiliar stimuli. These findings are interpreted to suggest that women are more likely than men to recruit native-language phonological knowledge during novel word-learning.

16.
17.
In everyday life, language use typically occurs within some visual context. A large body of research in cognitive science has shown that the visual and language processing systems do not operate independently but interact in complex ways. Focusing on how visual information influences language processing, this article first reviews research on the influence of visual information on speech comprehension, speech production, and verbal communication. It then examines the mechanisms by which visual information affects language processing. Finally, it introduces computational models of these effects and outlines directions for future research.

18.

This study presents a new research paradigm designed to explore the effect of anxiety on semantic information processing. It is based on the premise that the demonstrated effect of anxiety on cognitive performance and apparent inconsistencies reported in the literature might be better understood in terms of linguistic properties of inner speech which underlies analytic (vs. intuitive) thought processes. The study employed several parameters of functional linguistics in order to analyse properties of public speech by high- and low-anxious individuals. Results indicate that anxiety is associated with greater use of associative clauses that take the speaker further away from the original starting point before coming back and concluding (identified as reduced semantic efficiency). This is accompanied by a speech pattern that includes greater amounts of factual information unaccompanied by elaborate argumentation. While these results are considered tentative due to methodological and empirical shortcomings, they suggest the viability of this approach.

19.
We examined the prioritization of abruptly appearing and disappearing objects in real-world scenes. These scene changes occurred either during a fixation (transient appearance/disappearance) or during a saccade (nontransient appearance/disappearance). Prioritization was measured by the eyes' propensity to be directed to the region of the scene change. Object additions and deletions were fixated at rates greater than chance, suggesting that both types of scene change are cues used by the visual system to guide attention during scene exploration, although appearances were fixated twice as often as disappearances, indicating that new objects are more salient than deleted objects. New and deleted objects were prioritized sooner and more frequently if they occurred during a fixation, as compared with during a saccade, indicating an important role of the transient signal that often accompanies sudden changes in scenes. New objects were prioritized regardless of whether they appeared during a fixation or a saccade, whereas prioritization of a deleted object occurred only if (1) a transient signal was present or (2) the removal of the object revealed previously occluded objects.
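One way to cash out "fixated at rates greater than chance" is to compare the observed fixation rate on the changed region against an area-based chance rate. A sketch with invented region, scene, and trial numbers (an exact binomial test would be preferable given how small the chance rate is):

```python
import math

region_area, scene_area = 40 * 40, 800 * 600   # hypothetical region and scene sizes (pixels)
chance = region_area / scene_area              # chance rate under spatially uniform gaze

k, n = 19, 120                                 # hypothetical: region fixated on 19 of 120 trials
observed = k / n
se = math.sqrt(chance * (1 - chance) / n)      # normal approximation to the binomial
z = (observed - chance) / se                   # large z -> prioritized well above chance
```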

20.
Recent studies have shown that the presentation of concurrent linguistic context can lead to highly efficient performance in a standard conjunction search task by the induction of an incremental search strategy (Spivey, Tyler, Eberhard, & Tanenhaus, 2001). However, these findings were obtained under anomalously slow speech rate conditions. Accordingly, in the present study, the effects of concurrent linguistic context on visual search performance were compared when speech was recorded at both a normal rate and a slow rate. The findings provided clear evidence that the visual search benefit afforded by concurrent linguistic context was contingent on speech rate, with normal speech producing a smaller benefit. Overall, these findings have important implications for understanding how linguistic and visual processes interact in real time and suggest a disparity in the temporal resolution of speech comprehension and visual search processes.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号