Similar Articles
20 similar articles found (search time: 15 ms)
1.
Among theories of human language comprehension, cue-based memory retrieval has proven to be a useful framework for understanding when and how processing difficulty arises in the resolution of long-distance dependencies. Most previous work in this area has assumed that very general retrieval cues like [+subject] or [+singular] do the work of identifying (and sometimes misidentifying) a retrieval target in order to establish a dependency between words. However, recent work suggests that general, handpicked retrieval cues like these may not be enough to explain illusions of plausibility (Cunnings & Sturt, 2018), which can arise in sentences like The letter next to the porcelain plate shattered. Capturing such retrieval interference effects requires lexically specific features and retrieval cues, but handpicking the features is hard to do in a principled way and greatly increases modeler degrees of freedom. To remedy this, we use well-established word embedding methods for creating distributed lexical feature representations that encode information relevant for retrieval using distributed retrieval cue vectors. We show that the similarity between the feature and cue vectors (a measure of plausibility) predicts total reading times in Cunnings and Sturt’s eye-tracking data. The features can easily be plugged into existing parsing models (including cue-based retrieval and self-organized parsing), putting very different models on more equal footing and facilitating future quantitative comparisons.
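The plausibility measure this abstract describes can be sketched as the cosine similarity between a retrieval-cue vector and a lexical feature vector. The following is a minimal illustration with invented 4-dimensional vectors, not the authors' trained embeddings; the vector values and variable names are hypothetical.

```python
import math

def cosine_similarity(cue, features):
    """Cosine similarity between a retrieval-cue vector and a lexical feature vector."""
    dot = sum(c * f for c, f in zip(cue, features))
    norm = math.sqrt(sum(c * c for c in cue)) * math.sqrt(sum(f * f for f in features))
    return dot / norm

# Hypothetical 4-dimensional embeddings, invented for illustration only.
cue_shattered = [0.9, 0.1, 0.4, 0.2]   # retrieval cue projected from the verb "shattered"
feat_plate    = [0.8, 0.2, 0.5, 0.1]   # feature vector for "plate" (a plausible shatterer)
feat_letter   = [0.1, 0.9, 0.2, 0.8]   # feature vector for "letter" (an implausible shatterer)

# Higher cue-feature similarity = higher plausibility of the retrieved noun for the verb.
assert cosine_similarity(cue_shattered, feat_plate) > cosine_similarity(cue_shattered, feat_letter)
```

On this view, an illusion of plausibility arises when a structurally inaccessible noun (here, "plate") matches the verb's cue vector better than the grammatical subject does.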

2.
During the temporal delay between the filler and gap sites in long-distance dependencies, the “active filler” strategy can be implemented in two ways: the filler phrase can be actively maintained in working memory (“maintenance account”), or it can be retrieved only when the parser posits a gap (“retrieval account”). The current study tested whether filler content is maintained during the processing of dependencies. Using a self-paced reading paradigm, we compared reading times on a noun phrase (NP) between the filler and gap sites in object relative clauses, to reading times on an NP between the antecedent and ellipsis sites in ellipsis sentences. While in the former type of dependency a filler by hypothesis can be maintained, in the latter there is no indication for the existence of a dependency prior to the ellipsis site, and hence no maintenance. By varying the amount of similarity-based interference between the antecedent and integration sites, we tested the influence of holding an unresolved dependency on reading times. Significantly increased reading times due to interference were found only in the object relative condition, and not in the ellipsis condition, demonstrating filler maintenance costs. The fact that these costs were measured as an effect on similarity-based interference indicates that the maintained representation of the filler must include at least some of the features shared by the interfering NP.

3.
Experimental research shows that human sentence processing uses information from different levels of linguistic analysis, for example, lexical and syntactic preferences as well as semantic plausibility. Existing computational models of human sentence processing, however, have focused primarily on lexico-syntactic factors. Those models that do account for semantic plausibility effects lack a general model of human plausibility intuitions at the sentence level. Within a probabilistic framework, we propose a wide-coverage model that both assigns thematic roles to verb–argument pairs and determines a preferred interpretation by evaluating the plausibility of the resulting (verb, role, argument) triples. The model is trained on a corpus of role-annotated language data. We also present a transparent integration of the semantic model with an incremental probabilistic parser. We demonstrate that both the semantic plausibility model and the combined syntax/semantics model predict judgment and reading time data from the experimental literature.
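Scoring (verb, role, argument) triples from a role-annotated corpus can be sketched as a smoothed relative-frequency estimate. This is a minimal toy illustration, not the paper's actual model; the counts, the `plausibility` helper, and the smoothing scheme are all invented for the example.

```python
from collections import Counter

# Hypothetical counts of role-annotated (verb, role, argument) triples,
# standing in for a corpus of role-annotated language data.
triple_counts = Counter({
    ("eat", "Agent", "man"): 50,
    ("eat", "Patient", "apple"): 40,
    ("eat", "Patient", "man"): 2,
})

def plausibility(verb, role, arg):
    """Add-one-smoothed relative-frequency estimate of P(role, arg | verb)."""
    verb_total = sum(c for (v, _, _), c in triple_counts.items() if v == verb)
    return (triple_counts[(verb, role, arg)] + 1) / (verb_total + len(triple_counts))

# The preferred interpretation is the role assignment with the higher plausibility:
assert plausibility("eat", "Agent", "man") > plausibility("eat", "Patient", "man")
```

An incremental parser can then weight competing analyses by the plausibility of the triples each analysis commits to.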

4.
Structural reanalysis is generally assumed to be representation-preserving, whereby the initial analysis is manipulated or repaired to arrive at a new structure. This paper contends that the theoretical and empirical basis for such approaches is weak. A conceptually simpler alternative is that the processor reprocesses (some portion of) the input using just those structure-building operations available in first-pass parsing. This reprocessing is a necessary component of any realistic processing model. By contrast, the structural revisions required for second-pass repair are more powerful than warranted by the abilities of the first-pass parser. This paper also reviews experimental evidence for repair presented by Sturt, Pickering, and Crocker (1999). We demonstrate that the Sturt et al. findings are consistent with a reprocessing account and present a self-paced reading experiment intended to tease apart the repair and reprocessing accounts. The results support a reprocessing interpretation of Sturt et al.'s data, rendering a repair-based explanation superfluous.

5.
Memory limitations and probabilistic expectations are two key factors that have been posited to play a role in the incremental processing of natural language. Relative clauses (RCs) have long served as a key proving ground for such theories of language processing. Across three self-paced reading experiments, we test the online comprehension of Hungarian subject- and object-extracted RCs (SRCs and ORCs, respectively). We capitalize on the syntactic properties of Hungarian that allow for a variety of word orders within RCs, which helps us to delineate the processing costs associated with memory demand and violated expectations. Results showed a processing cost at the RC verb for structures that have longer verb-argument distances, despite those structures being more frequent in the corpus. These findings thus support theories that attribute processing difficulty to memory limitations, rather than theories that attribute difficulty to less expected structures.

6.
All other things being equal, the parser favors attaching an ambiguous modifier to the most recent possible site. A plausible explanation is that locality preferences such as this arise in the service of minimizing memory costs—more distant sentential material is more difficult to reactivate than more recent material. Note that processing any sentence requires linking each new lexical item with material in the current parse. This often involves the construction of long-distance dependencies. Under a resource-limited view of language processing, lengthy integrations should induce difficulty even in unambiguous sentences. To date there has been little direct quantitative evidence in support of this perspective. This article presents 2 self-paced reading studies, which explore the hypothesis that dependency distance is a fundamental determinant of reading complexity in unambiguous constructions in English. The evidence suggests that the difficulty associated with integrating a new input item is heavily determined by the amount of lexical material intervening between the input item and the site of its target dependents. The patterns observed here are not straightforwardly accounted for within purely experience-based models of complexity. Instead, this work supports the role of a memory bottleneck in language comprehension. This constraint arises because hierarchical linguistic relations must be recovered from a linear input stream.

7.
In text reading, readers often generate predictive inferences about how events will subsequently unfold. Such predictive inferences take two forms: reality-based predictions, grounded in objective, real-world conditions, and desire-based predictions, grounded in the reader's subjective wishes. Three experiments used a self-paced reading paradigm to examine how reality-based and desire-based predictions generated during text reading are maintained. The results showed a clear difference between the two types of prediction in long-term memory: reality-based predictions were not maintained, whereas desire-based predictions were. However, desire-based predictions do not persist unconditionally; they are constrained by real-world conditions and decay immediately once contradicted by reality, no longer influencing the reader's subsequent reading.

8.
The goal of the present set of experiments was to examine whether a cue-based mechanism could account for how, and under what conditions, spatial information is tracked. In five experiments, reading times were measured for a target sentence that contradicted the earlier-described location of a protagonist. When the target sentence contained either one or two cues to earlier spatial information (Experiments 1a-1c), reading times were disrupted. When all cues were eliminated (Experiments 2a and 2b), reading times were disrupted only when readers were instructed to take the perspective of the protagonist. The combined results of all five experiments are consistent with a cue-based mechanism: Readers encode spatial information but do not update earlier-encoded spatial information except in response to specific text characteristics (i.e., cues to earlier spatial information) or task demands (e.g., an instruction to read from the perspective of the protagonist) that increase the accessibility of earlier-encoded spatial information.

9.
Four experiments were conducted to investigate whether semantic activation of a concept spreads to phonologically and graphemically related concepts. In lexical decision or self-paced reading tasks, subjects responded to pairs of words that were semantically related (e.g., light-lamp), that rhymed (e.g., lamp-damp), or that combined both of these relations through a mediating word (e.g., light-damp). In one version of each task, test lists contained word-word pairs (e.g., light-lamp) as well as nonword-word (e.g., pown-table) and word-nonword pairs (e.g., month-poad); in another version, test lists contained only word-word pairs. The lexical decision and self-paced reading tasks were facilitated by semantic and rhyming relations regardless of the presence or absence of nonwords on the test lists. The effect of the mediated relation, however, depended on the presence of nonwords among the stimuli. When only words were included, there was no effect of the mediated relation, but when nonwords were included, lexical decision and self-paced reading responses were inhibited by the mediated relation. These inhibitory effects are attributed to processes occurring after lexical access, and the relative advantages of the self-paced reading task are discussed.

10.
Two dual-task experiments (replications of Experiments 1 and 2 in Fedorenko, Gibson, & Rohde, 2007, Journal of Memory and Language, 56, 246–269) were conducted to determine whether syntactic and arithmetical operations share working memory resources. Subjects read object- or subject-extracted relative clause sentences phrase by phrase in a self-paced task while simultaneously adding or subtracting numbers. Experiment 2 measured eye fixations as well as self-paced reaction times. In both experiments, there were main effects of syntax and of mathematical operation on self-paced reading times, but no interaction of the two. In the Experiment 2 eye-tracking results, there were main effects of syntax on first-pass reading time and total reading time and an interaction between syntax and math in total reading time on the noun phrase within the relative clause. The findings point to differences in the ways individuals process sentences under these dual-task conditions, as compared with viewing sentences during “normal” reading conditions, and do not support the view that arithmetical and syntactic integration operations share a working memory system.

11.
Evidence from 3 experiments reveals interference effects from structural relationships that are inconsistent with any grammatical parse of the perceived input. Processing disruption was observed when items occurring between a head and a dependent overlapped with either (or both) syntactic or semantic features of the dependent. Effects of syntactic interference occur in the earliest online measures in the region where the retrieval of a long-distance dependent occurs. Semantic interference effects occur in later online measures at the end of the sentence. Both effects endure in offline comprehension measures, suggesting that interfering items participate in incorrect interpretations that resist reanalysis. The data are discussed in terms of a cue-based retrieval account of parsing, which reconciles the fact that the parser must violate the grammar in order for these interference effects to occur. Broader implications of this research indicate a need for a precise specification of the interface between the parsing mechanism and the memory system that supports language comprehension.

12.
This work presents an analysis of the role of animacy in attachment preferences of relative clauses to complex noun phrases in European Portuguese (EP). The study of how the human parser resolves this kind of syntactic ambiguity has been the focus of extensive research. However, what is known about EP is both limited and puzzling. Additionally, as recent studies have stressed the importance of extra-syntactic variables in this process, two experiments were carried out to assess EP attachment preferences considering four animacy conditions: Study 1 used a sentence-completion task, and Study 2 a self-paced reading task. Both studies indicate a significant preference for high attachment in EP. Furthermore, they showed that this preference was modulated by the animacy of the host NP: if the first host was inanimate and the second one was animate, the parser's preference changed to low attachment. These findings shed light on previous results regarding EP and strengthen the idea that, even in early stages of processing, the parser seems to be sensitive to extra-syntactic information.

13.
In this article, we validate an experimental paradigm, SPaM, that we first described elsewhere (Luke & Christianson, Memory & Cognition 40:628–641, 2012). SPaM is a synthesis of self-paced reading and masked priming. The primary purpose of SPaM is to permit the study of sentence context effects on early word recognition. In the experiment reported here, we show that SPaM successfully reproduces results from both the self-paced reading and masked-priming literatures. We also outline the advantages and potential uses of this paradigm. For users of E-Prime, the experimental program can be downloaded from our lab website, http://epl.beckman.illinois.edu/.

14.
The prominent cognitive theories of probability judgment were primarily developed to explain cognitive biases rather than to account for the cognitive processes in probability judgment. In this article the authors compare 3 major theories of the processes and representations in probability judgment: the representativeness heuristic, implemented as prototype similarity, relative likelihood, or evidential support accumulation (ESAM; D. J. Koehler, C. M. White, & R. Grondin, 2003); cue-based relative frequency; and exemplar memory, implemented by probabilities from exemplars (PROBEX; P. Juslin & M. Persson, 2002). Three experiments with different task structures consistently demonstrate that exemplar memory is the best account of the data whereas the results are inconsistent with extant formulations of the representativeness heuristic and cue-based relative frequency.

15.
The allocation of processing resources during spoken discourse comprehension was studied in a manner analogous to self-paced reading using the auditory moving window technique (Ferreira, Henderson, Anes, Weeks, & McFarlane, 1996). Young and older participants listened to spoken passages in a self-paced segment-by-segment fashion. In Experiment 1, we examined the influence of speech rate and passage complexity on discourse encoding and recall performance. In Experiment 2, we examined the influence of speech rate and presentation mode (self-paced vs. full-passage presentation) on recall performance. Results suggest that diminished memory performance in the older adult group relative to the young adult group is attributable to age-related differences in how resources were allocated during the initial encoding of the spoken discourse.

16.
Demberg, V., & Keller, F. (2008). Cognition, 109(2), 193–210.
We evaluate the predictions of two theories of syntactic processing complexity, dependency locality theory (DLT) and surprisal, against the Dundee Corpus, which contains the eye-tracking record of 10 participants reading 51,000 words of newspaper text. Our results show that DLT integration cost is not a significant predictor of reading times for arbitrary words in the corpus. However, DLT successfully predicts reading times for nouns. We also find evidence for integration cost effects at auxiliaries, not predicted by DLT. For surprisal, we demonstrate that an unlexicalized formulation of surprisal can predict reading times for arbitrary words in the corpus. Comparing DLT integration cost and surprisal, we find that the two measures are uncorrelated, which suggests that a complete theory will need to incorporate both aspects of processing complexity. We conclude that eye-tracking corpora, which provide reading time data for naturally occurring, contextualized sentences, can complement experimental evidence as a basis for theories of processing complexity.
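The surprisal measure compared against DLT above is standardly defined as the negative log probability of a word given its preceding context. A minimal sketch, with invented probability values purely for illustration:

```python
import math

def surprisal(p):
    """Surprisal of a word = -log2 P(word | context), measured in bits."""
    return -math.log2(p)

# Invented conditional probabilities: under surprisal theory, reading time at a
# word grows with how unexpected that word is given the preceding context.
p_expected, p_unexpected = 0.5, 0.01
assert surprisal(p_expected) < surprisal(p_unexpected)
print(round(surprisal(p_expected), 2), round(surprisal(p_unexpected), 2))  # 1.0 6.64
```

An "unlexicalized" formulation, as in the abstract, estimates these probabilities over part-of-speech or structural categories rather than word identities, so predictions do not depend on individual lexical frequencies.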

17.
Although Internet-based experiments are gaining in popularity, most studies rely on directly evaluating participants’ responses rather than response times. In the present article, we present two experiments that demonstrate the feasibility of collecting response latency data over the World-Wide Web using WebExp—a software package designed to run psychological experiments over the Internet. Experiment 1 uses WebExp to collect measurements for known time intervals (generated using keyboard repetition). The resulting measurements are found to be accurate across platforms and load conditions. In Experiment 2, we use WebExp to replicate a lab-based self-paced reading study from the psycholinguistic literature. The data of the Web-based replication correlate significantly with those of the original study and show the same main effects and interactions. We conclude that WebExp can be used to obtain reliable response time data, at least for the self-paced reading paradigm.

18.
We investigated whether readers use verb information to aid in their initial parsing of temporarily ambiguous sentences. In the first experiment, subjects' eye movements were recorded. In the second and third experiments, subjects read sentences by using a noncumulative and cumulative word-by-word self-paced paradigm, respectively. The results of the first two experiments supported Frazier and Rayner's (1982) garden-path model of sentence comprehension: Verb information did not influence the initial operation of the parser. The third experiment indicated that the cumulative version of the self-paced paradigm is not appropriate for studying on-line parsing. We conclude that verb information is not used by the parser to modify its initial parsing strategies, although it may be used to guide subsequent reanalysis.

19.
Many comprehension theories assert that increasing the distance between elements participating in a linguistic relation (e.g., a verb and a noun phrase argument) increases the difficulty of establishing that relation during on-line comprehension. Such locality effects are expected to increase reading times and are thought to reveal properties and limitations of the short-term memory system that supports comprehension. Despite their theoretical importance and putative ubiquity, however, evidence for on-line locality effects is quite narrow linguistically and methodologically: It is restricted almost exclusively to self-paced reading of complex structures involving a particular class of syntactic relation. We present 4 experiments (2 self-paced reading and 2 eyetracking experiments) that demonstrate locality effects in the course of establishing subject-verb dependencies; locality effects are seen even in materials that can be read quickly and easily. These locality effects are observable in the earliest possible eye-movement measures and are of much shorter duration than previously reported effects. To account for the observed empirical patterns, we outline a processing model of the adaptive control of button pressing and eye movements. This model makes progress toward the goal of eliminating linking assumptions between memory constructs and empirical measures in favor of explicit theories of the coordinated control of motor responses and parsing.

20.
Two self-paced reading experiments using a paraphrase decision task paradigm were performed to investigate how sentence complexity contributed to the relative clause (RC) attachment preferences of speakers of different working memory capacities (WMCs). Experiment 1 (English) showed working memory effects on relative clause processing in both offline RC attachment preferences and in online reading time measures, but no effects of syntactic complexity. In Experiment 2 (Korean), syntactic complexity due to greater distance between integrating heads, as measured by the dependency locality theory (Gibson in Cognition 68:1–76, 1998), significantly increased the proportion of attachment to NP1. However, no effects of working memory were found. The difference in results between English and Korean is proposed to be due to head-directionality effects. The results of our study support the conclusion that working memory-based accounts provide a better explanation than previous language-dependent accounts for differences in RC attachment preferences. We propose that previous language-dependent accounts of cross-linguistic differences in RC processing have overlooked the interaction between individual WMC and a language’s general structure, which is a central factor in RC attachment.

