Similar articles
 20 similar articles found (search time: 31 ms)
1.
Previous masked priming research in word recognition has demonstrated that repetition priming is influenced by experiment-wise information structure, such as proportion of target repetition. Research using naturalistic tasks and eye-tracking has shown that people use linguistic knowledge to anticipate upcoming words. We examined whether the proportion of target repetition within an experiment can have a similar effect on anticipatory eye movements. We used a word-to-picture matching task (i.e., the visual world paradigm) with target repetition proportion carefully controlled. Participants' eye movements were tracked starting when the pictures appeared, one second prior to the onset of the target word. Targets repeated from the previous trial were fixated more than other items during this preview period when target repetition proportion was high and less than other items when target repetition proportion was low. These results indicate that linguistic anticipation can be driven by short-term within-experiment trial structure, with implications for the generalization of priming effects, the bases of anticipatory eye movements, and experiment design.

2.
Most of the nearly 300 indigenous American languages in North America are moribund, including Ojibwe and Dakota. Despite numerous basic studies of stimulus equivalence, only a small handful of applied studies have demonstrated that a stimulus equivalence paradigm can be an effective and efficient means of teaching several concepts including math, spelling, and a second language. This study was designed to apply a stimulus equivalence paradigm involving match-to-sample procedures to teaching numbers and words in a second, endangered, language. A pretest–posttest randomized group design was used to examine the effectiveness of a stimulus equivalence computer program for teaching unknown Ojibwe and Dakota words to pre-kindergarteners. All of the participants who received the computer training demonstrated the development of equivalence classes that included numerals, spoken English words, and written words in Ojibwe and Dakota. Results also suggested that the stimulus equivalence paradigm may be an efficient way to teach words in a second language and to aid in language revitalization efforts.

3.
The anticipation of the forthcoming behaviour of social interaction partners is a useful ability supporting interaction and communication between social partners. Associations and prediction based on the production system (in line with views that listeners use the production system covertly to anticipate what the other person might be likely to say) are two potential factors, which have been proposed to be involved in anticipatory language processing. We examined the influence of both factors on the degree to which listeners predict upcoming linguistic input. Are listeners more likely to predict book as an appropriate continuation of the sentence "The boy reads a", based on the strength of the association between the words read and book (strong association) and read and letter (weak association)? Do more proficient producers predict more? What is the interplay of these two influences on prediction? The results suggest that associations influence language-mediated anticipatory eye gaze in two-year-olds and adults only when two thematically appropriate target objects compete for overt attention but not when these objects are presented separately. Furthermore, children's prediction abilities are strongly related to their language production skills when appropriate target objects are presented separately but not when presented together. Both influences on prediction in language processing thus appear to be context dependent. We conclude that multiple factors simultaneously influence listeners' anticipation of upcoming linguistic input and that only such a dynamic approach to prediction can capture listeners' prowess at predictive language processing.

4.
Individual differences in children's online language processing were explored by monitoring their eye movements to objects in a visual scene as they listened to spoken sentences. Eleven skilled and 11 less-skilled comprehenders were presented with sentences containing verbs that were either neutral with respect to the visual context (e.g., Jane watched her mother choose the cake, where all of the objects in the scene were choosable) or supportive (e.g., Jane watched her mother eat the cake, where the cake was the only edible object). On hearing the supportive verb, the children made fast anticipatory eye movements to the target object (e.g., the cake), suggesting that children extract information from the language they hear and use this to direct ongoing processing. Less-skilled comprehenders did not differ from controls in the speed of their anticipatory eye movements, suggesting normal sensitivity to linguistic constraints. However, less-skilled comprehenders made a greater number of fixations to target objects, and these fixations were of a duration shorter than those observed in the skilled comprehenders, especially in the supportive condition. This pattern of results is discussed in terms of possible processing limitations, including difficulties with memory, attention, or suppressing irrelevant information.

5.
In the visual world paradigm, participants are more likely to fixate a visual referent that has some semantic relationship with a heard word than they are to fixate an unrelated referent [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107]. Here, this method is used to examine the psychological validity of models of high-dimensional semantic space. The data strongly suggest that these corpus-based measures of word semantics predict fixation behavior in the visual world and provide further evidence that language-mediated eye movements to objects in the concurrent visual environment are driven by semantic similarity rather than all-or-none categorical knowledge. The data suggest that the visual world paradigm can, together with other methodologies, converge on evidence that may help adjudicate between different theoretical accounts of psychological semantics.
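For readers unfamiliar with high-dimensional semantic space models, the sketch below illustrates the general idea behind such corpus-based measures: word meanings are represented as vectors, and relatedness is a graded cosine similarity rather than an all-or-none category match. The vectors, object names, and dimensionality are hypothetical placeholders, not the materials or the specific model evaluated in the study.

import math

# Toy illustration (not the authors' model): relatedness between a heard
# word and each depicted object is the cosine of their meaning vectors.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional vectors; real models derive hundreds of
# dimensions from corpus co-occurrence statistics.
heard_word = [0.9, 0.1, 0.3, 0.2]          # e.g., "piano"
objects = {
    "trumpet": [0.8, 0.2, 0.4, 0.1],       # semantically related
    "table":   [0.1, 0.9, 0.2, 0.3],       # unrelated distractor
}

# Prediction under the graded-similarity view: fixation probability should
# track these continuous scores, not a binary category match.
for name, vec in objects.items():
    print(name, round(cosine(heard_word, vec), 3))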

6.
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices.

7.
When viewing a visual scene, eye movements are often language-mediated: people look at objects as those objects are named. Eye movements can even reflect predictive language processing, moving to an object before it is named. Children are also capable of making language-mediated eye movements, even predictive ones, and prediction may be involved in language learning. The present study explored whether eye movements are language-mediated in a more naturalistic task – shared storybook reading. Research has shown that children fixate illustrations during shared storybook reading, ignoring text. The present study used high-precision eye-tracking to replicate this finding. Further, prereader participants showed an increased likelihood of fixating relevant storybook illustrations as words were read aloud, indicating that their eye movements were language-mediated, like those of the adult participants. Language-mediated eye movements to illustrations were reactive, not predictive, in both participant groups.

8.
A number of previous studies have reported syntactic priming in young children as evidence for the cognitive representations required for processing syntactic structures. However, it remains unclear how syntactic priming reflects children's grammatical competence. The current study investigated structural priming of the Japanese passive structure with 5- and 6-year-old children in a visual-world setting. Our results showed a priming effect, manifested as anticipatory eye movements to an upcoming referent, in these children, but the effect was significantly stronger in magnitude in 6-year-olds than in 5-year-olds. Consistently, the responses to comprehension questions revealed that 6-year-olds produced a greater number of correct answers and more answers using the passive structure than 5-year-olds. We also tested adult participants, who showed even stronger priming than the children. Together, the results revealed that language users with greater linguistic competence with passives exhibited stronger priming, demonstrating a tight relationship between the priming effect and the development of grammatical competence. Furthermore, we found that the magnitude of the priming effect decreased over time. We interpret these results in light of an error-based learning account. Our results also provide evidence for pre-head as well as head-independent priming.

9.
When identifying spoken words, older listeners may have difficulty resolving lexical competition or may place greater weight on factors such as lexical frequency. To obtain information about age differences in the time course of spoken word recognition, young and older adults' eye movements were monitored as they followed spoken instructions to click on objects displayed on a computer screen. Older listeners were more likely than younger listeners to fixate high-frequency displayed phonological competitors. However, degrading the auditory quality for younger listeners did not reproduce this result. These data are most consistent with an increased role for lexical frequency with age.

10.
When listeners follow spoken instructions to manipulate real objects, their eye movements to the objects are closely time locked to the referring words. We review five experiments showing that this time-locked characteristic of eye movements provides a detailed profile of the processes that underlie real-time spoken language comprehension. Together, the first four experiments showed that listeners immediately integrated lexical, sublexical, and prosodic information in the spoken input with information from the visual context to reduce the set of referents to the intended one. The fifth experiment demonstrated that a visual referential context affected the initial structuring of the linguistic input, eliminating even strong syntactic preferences that result in clear garden paths when the referential context is introduced linguistically. We argue that context affected the earliest moments of language processing because it was highly accessible and relevant to the behavioral goals of the listener.

11.
A growing number of researchers in the sentence processing community are using eye movements to address issues in spoken language comprehension. Experiments using this paradigm have shown that visually presented referential information, including properties of referents relevant to specific actions, influences even the earliest moments of syntactic processing. Methodological concerns about task-specific strategies and the linking hypothesis between eye movements and linguistic processing are identified and discussed. These concerns are addressed in a review of recent studies of spoken word recognition which introduce and evaluate a detailed linking hypothesis between eye movements and lexical access. The results provide evidence about the time course of lexical activation that resolves some important theoretical issues in spoken-word recognition. They also demonstrate that fixations are sensitive to properties of the normal language-processing system that cannot be attributed to task-specific strategies.

12.
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

13.
Huettig, F., & Altmann, G. T. (2005). Cognition, 96(1), B23-B32.
When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107; Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632-1634]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word 'piano' when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c) both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word 'piano' unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.

14.
This study explores incremental processing in spoken word recognition in Russian 5- and 6-year-olds and adults using free-viewing eye-tracking. Participants viewed scenes containing pictures of four familiar objects and clicked on a target embedded in a spoken instruction. In the cohort condition, two object names shared identical three-phoneme onsets. In the noncohort condition, all object names had unique onsets. Coarse-grain analyses of eye movements indicated that adults produced looks to the competitor on significantly more cohort trials than on noncohort trials, whereas children surprisingly failed to demonstrate cohort competition due to widespread exploratory eye movements across conditions. Fine-grain analyses, in contrast, showed a similar time course of eye movements across children and adults, but with cohort competition lingering more than 1 s longer in children. The dissociation between coarse-grain and fine-grain eye movements indicates a need to consider multiple behavioral measures in making developmental comparisons in language processing.

15.
16.
When people write about their deepest thoughts and feelings about an emotionally significant event, numerous benefits in many domains (e.g., health, achievement, and well-being) result. As one step in understanding how writing achieves these effects, we have developed a computer program that provides a "fingerprint" of the words people use in writing or in natural settings. Analyses of text samples indicate that particular patterns of word use predict health and also reflect personality styles. We have also discovered that language use in the laboratory writing paradigm is associated with changes in social interactions and language use in the real world. The implications for using computer-based text analysis programs in the development of psychological theory are discussed.
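The abstract does not name the program, but the "fingerprint" it describes amounts to counting what proportion of a text's words fall into predefined categories. The sketch below illustrates that general approach; the category names and word lists are invented for illustration and are not the program's actual dictionaries.

import re
from collections import Counter

# Illustrative (invented) category dictionaries; a real system would use
# validated word lists covering many more categories.
CATEGORIES = {
    "positive_emotion": {"happy", "good", "love", "hope"},
    "negative_emotion": {"sad", "angry", "hurt", "afraid"},
    "cognitive":        {"because", "think", "realize", "understand"},
}

def word_fingerprint(text):
    """Return each category's share of all words in the sample, in percent."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    counts = Counter()
    for w in words:
        for category, vocabulary in CATEGORIES.items():
            if w in vocabulary:
                counts[category] += 1
    return {category: 100 * counts[category] / total for category in CATEGORIES}

print(word_fingerprint("I was sad at first, but I think I understand why it happened."))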

17.
Rapid Gains in Speed of Verbal Processing by Infants in the 2nd Year
Infants improve substantially in language ability during their 2nd year. Research on the early development of speech production shows that vocabulary begins to expand rapidly around the age of 18 months. During this period, infants also make impressive gains in understanding spoken language. We examined the time course of word recognition in infants from ages 15 to 24 months, tracking their eye movements as they looked at pictures in response to familiar spoken words. The speed and efficiency of verbal processing increased dramatically over the 2nd year. Although 15-month-old infants did not orient to the correct picture until after the target word was spoken, 24-month-olds were significantly faster, shifting their gaze to the correct picture before the end of the spoken word. By 2 years of age, children are progressing toward the highly efficient performance of adults, making decisions about words based on incomplete acoustic information.

18.
Gaze shifts and fixations appear to be proactive in both action execution and action observation. We investigated the dependency of anticipatory gaze behaviour using a block-stacking task. Blocks were rectangles depicted on a computer screen, and the stacking movements were controlled via a computer mouse. Subjects either had to execute the task themselves or had to observe it being performed by the experimenter or by the computer. The dependency of gaze behaviour on the visibility of a virtual effector, the visibility of the actor, and the nature of the actor was tested by measuring eye movements. Anticipatory eye movements were predominant when the subjects themselves executed the task. During action observation, gaze behaviour depended neither on the visibility nor on the nature of the actor. However, considerable variability was found between subjects, suggesting the use of two different strategies in action observation: some subjects mainly tracked the blocks during stacking movements, whereas others strongly anticipated. We suggest that gaze behaviour during action observation is not predetermined by rigid neural circuitry but depends strongly on context. The ability to explain the causal mechanism, as well as ownership of the action, may be crucial preconditions for anticipatory gaze behaviour.

19.
When participants follow spoken instructions to pick up and move objects in a visual workspace, their eye movements to the objects are closely time-locked to referential expressions in the instructions. Two experiments used this methodology to investigate the processing of the temporary ambiguities that arise because spoken language unfolds over time. Experiment 1 examined the processing of sentences with a temporarily ambiguous prepositional phrase (e.g., "Put the apple on the towel in the box") using visual contexts that supported either the normally preferred initial interpretation (the apple should be put on the towel) or the less-preferred interpretation (the apple is already on the towel and should be put in the box). Eye movement patterns clearly established that the initial interpretation of the ambiguous phrase was the one consistent with the context. Experiment 2 replicated these results using prerecorded digitized speech to eliminate any possibility of prosodic differences across conditions or experimenter demand. Overall, the findings are consistent with a broad theoretical framework in which real-time language comprehension immediately takes into account a rich array of relevant nonlinguistic context.

20.
When bilinguals process written language, they show delays in accessing lexical items relative to monolinguals. The present study investigated whether this effect extended to spoken language comprehension, examining the processing of sentences with either low or high semantic constraint in both first and second languages. English-German bilinguals, German-English bilinguals and English monolinguals listened for target words in spoken English sentences while their eye-movements were recorded. Bilinguals' eye-movements reflected weaker lexical access relative to monolinguals; furthermore, the effect of semantic constraint differed across first versus second language processing. Specifically, English-native bilinguals showed fewer overall looks to target items, regardless of sentence constraint; German-native bilinguals activated target items more slowly and maintained target activation over a longer period of time in the low-constraint condition compared with monolinguals. No eye movements to cross-linguistic competitors were observed, suggesting that these lexical access disadvantages were present during bilingual spoken sentence comprehension even in the absence of overt interlingual competition.
