Subscription full text: 10 articles; free access: 4 articles.
By year: 2021 (2), 2020 (1), 2019 (1), 2016 (3), 2015 (1), 2011 (2), 2010 (1), 2008 (1), 2006 (1), 2005 (1).
14 results in total.
1.
Information protocols (IPs) were developed to describe players who learn their social situation by their experiences. Although IPs look similar to colored multi-graphs (MGs), the two objects are constructed in fundamentally different ways. IPs are constructed using the global concept of history, whereas graphs are constructed using the local concept of edges. We give necessary and sufficient conditions for each theory to be captured by the other. We find that the necessary and sufficient condition for IP theory to be captured by MG theory, which we call SE, excludes relevant game situations. Hence, we conclude that IP theory remains a vital tool and cannot be replaced by MG theory.
2.
Scanpaths have played an important role in classic research on reading behavior. Nevertheless, they have largely been neglected in later research, perhaps due to a lack of suitable analytical tools. Recently, von der Malsburg and Vasishth (2011) proposed a new measure for quantifying differences between scanpaths and demonstrated that this measure can recover effects that were missed with the traditional eyetracking measures. However, the sentences used in that study were difficult to process, and the scanpath effects were accordingly strong. The purpose of the present study was to test the validity, sensitivity, and scope of applicability of the scanpath measure, using simple sentences that are typically read from left to right. We derived predictions for the regularity of scanpaths from the literature on oculomotor control, sentence processing, and cognitive aging, and tested these predictions using the scanpath measure and a large database of eye movements. All predictions were confirmed: Sentences with short words and syntactically more difficult sentences elicited more irregular scanpaths. Also, older readers produced more irregular scanpaths than younger readers. In addition, we found an effect that was not reported earlier: Syntax had a smaller influence on the eye movements of older readers than on those of young readers. We discuss this interaction of syntactic parsing cost with age in terms of shifts in processing strategies and a decline of executive control as readers age. Overall, our results demonstrate the validity and sensitivity of the scanpath measure and thus establish it as a productive and versatile tool for reading research.
3.
Individuals with agrammatic Broca's aphasia experience difficulty when processing reversible non-canonical sentences. Different accounts have been proposed to explain this phenomenon. The Trace Deletion account (Grodzinsky, 1995, 2000, 2006) attributes this deficit to an impairment in syntactic representations, whereas others (e.g., Caplan, Waters, Dede, Michaud, & Reddy, 2007; Haarmann, Just, & Carpenter, 1997) propose that the underlying structural representations are unimpaired, but sentence comprehension is affected by processing deficits, such as slow lexical activation, reduction in memory resources, slowed processing, and/or intermittent deficiency, among others. We test the claims of two processing accounts, slowed processing and intermittent deficiency, and two versions of the Trace Deletion Hypothesis (TDH), in a computational framework for sentence processing (Lewis & Vasishth, 2005) implemented in ACT-R (Anderson, Byrne, Douglass, Lebiere, & Qin, 2004). The assumption of slowed processing is operationalized as slow procedural memory, so that each processing action is performed more slowly than normal, and intermittent deficiency as extra noise in the procedural memory, so that the parsing steps are noisier than normal. We operationalize the TDH as an absence of trace information in the parse tree. To test the predictions of the models implementing these theories, we use the data from a German sentence-picture matching study reported in Hanne, Sekerina, Vasishth, Burchert, and De Bleser (2011). The data consist of offline (sentence-picture matching accuracies and response times) and online (eye fixation proportions) measures. From among the models considered, the model assuming that both slowed processing and intermittent deficiency are present emerges as the best model of sentence processing difficulty in aphasia. The modeling of individual differences suggests that, if we assume that patients have both slowed processing and intermittent deficiency, they have them in differing degrees.
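As a rough illustration of how these two deficit assumptions can be operationalized, the sketch below simulates a retrieval race in which "slowed processing" scales every step's latency and "intermittent deficiency" adds extra noise to the competition. It is a minimal Monte Carlo sketch, not the ACT-R implementation fitted to the Hanne et al. (2011) data, and all numeric values are illustrative assumptions.

```python
# Toy simulation: slowed processing = scaled latencies, intermittent deficiency = extra noise.
import random
import math

def simulate(trials=10_000, latency_scale=1.0, extra_noise=0.0,
             target_activation=1.0, distractor_activation=0.5,
             base_noise=0.3, latency_factor=0.2):
    correct, total_rt = 0, 0.0
    for _ in range(trials):
        noise_sd = base_noise + extra_noise            # intermittent deficiency: noisier steps
        a_target = target_activation + random.gauss(0, noise_sd)
        a_distractor = distractor_activation + random.gauss(0, noise_sd)
        winner = max(a_target, a_distractor)
        rt = latency_scale * latency_factor * math.exp(-winner)  # slowed processing: scaled latency
        correct += a_target >= a_distractor            # wrong candidate sometimes wins under noise
        total_rt += rt
    return correct / trials, total_rt / trials

# Unimpaired baseline vs. a model with both deficits:
print(simulate())                                      # high accuracy, fast responses
print(simulate(latency_scale=3.0, extra_noise=0.6))    # slower and more error-prone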
4.
We present a detailed process theory of the moment-by-moment working-memory retrievals and associated control structure that subserve sentence comprehension. The theory is derived from the application of independently motivated principles of memory and cognitive skill to the specialized task of sentence parsing. The resulting theory construes sentence processing as a series of skilled associative memory retrievals modulated by similarity-based interference and fluctuating activation. The cognitive principles are formalized in computational form in the Adaptive Control of Thought-Rational (ACT-R) architecture, and our process model is realized in ACT-R. We present the results of 6 sets of simulations: 5 simulation sets provide quantitative accounts of the effects of length and structural interference on both unambiguous and garden-path structures. A final simulation set provides a graded taxonomy of double center embeddings ranging from relatively easy to extremely difficult. The explanation of center-embedding difficulty is a novel one that derives from the model's complete reliance on discriminating retrieval cues in the absence of an explicit representation of serial order information. All fits were obtained with only 1 free scaling parameter fixed across the simulations; all other parameters were ACT-R defaults. The modeling results support the hypothesis that fluctuating activation and similarity-based interference are the key factors shaping working memory in sentence processing. We contrast the theory and empirical predictions with several related accounts of sentence-processing complexity.
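For readers who want to see where "fluctuating activation" and "similarity-based interference" enter such a model, the equations below are the standard ACT-R declarative-retrieval forms (textbook defaults, not a restatement of this paper's full parameterization; the single free scaling parameter mentioned above is a latency scale of the kind written F here).

```latex
% Standard ACT-R retrieval equations (architecture defaults).
% Total activation of chunk i: base level + cue-weighted associative boosts + noise
A_i = B_i + \sum_j W_j \, S_{ji} + \epsilon_i
% Base-level activation decays with time since each prior access (default d = 0.5)
B_i = \ln\!\left( \sum_{k=1}^{n} t_k^{-d} \right)
% Associative strength falls off with the "fan" of cue j (similarity-based interference)
S_{ji} = S_{\max} - \ln(\mathrm{fan}_j)
% Retrieval latency scales exponentially with activation; F is a latency scale
T_i = F \, e^{-A_i}
```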
5.
Among theories of human language comprehension, cue-based memory retrieval has proven to be a useful framework for understanding when and how processing difficulty arises in the resolution of long-distance dependencies. Most previous work in this area has assumed that very general retrieval cues like [+subject] or [+singular] do the work of identifying (and sometimes misidentifying) a retrieval target in order to establish a dependency between words. However, recent work suggests that general, handpicked retrieval cues like these may not be enough to explain illusions of plausibility (Cunnings & Sturt, 2018), which can arise in sentences like The letter next to the porcelain plate shattered. Capturing such retrieval interference effects requires lexically specific features and retrieval cues, but handpicking the features is hard to do in a principled way and greatly increases modeler degrees of freedom. To remedy this, we use well-established word embedding methods for creating distributed lexical feature representations that encode information relevant for retrieval using distributed retrieval cue vectors. We show that the similarity between the feature and cue vectors (a measure of plausibility) predicts total reading times in Cunnings and Sturt's eye-tracking data. The features can easily be plugged into existing parsing models (including cue-based retrieval and self-organized parsing), putting very different models on more equal footing and facilitating future quantitative comparisons.
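The core computation is simple to sketch: a distributed retrieval cue and a candidate word's embedding are compared by cosine similarity, and that graded similarity serves as the plausibility predictor. The toy vectors below are random stand-ins, not the trained embeddings used in the paper.

```python
# Minimal sketch: cue-feature similarity as a graded plausibility measure.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
dim = 50
# Stand-ins for a verb's retrieval cue ("shatterable subject") and two candidate nouns;
# a real model would use trained word embeddings.
cue = rng.normal(size=dim)
plate = cue + 0.3 * rng.normal(size=dim)   # plausible shatterer: vector near the cue
letter = rng.normal(size=dim)              # implausible subject: unrelated vector

print("plate  vs cue:", round(cosine(cue, plate), 2))   # high similarity
print("letter vs cue:", round(cosine(cue, letter), 2))  # low similarity
# Higher cue-feature similarity -> higher plausibility -> shorter predicted
# total reading times in the eye-tracking data (per the abstract).
```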
6.
A central question in online human sentence comprehension is, “How are linguistic relations established between different parts of a sentence?” Previous work has shown that this dependency resolution process can be computationally expensive, but the underlying reasons for this are still unclear. This article argues that dependency resolution is mediated by cue-based retrieval, constrained by independently motivated working memory principles defined in a cognitive architecture. To demonstrate this, this article investigates an unusual instance of dependency resolution, the processing of negative and positive polarity items, and confirms a surprising prediction of the cue-based retrieval model: Partial-cue matches, which constitute a kind of similarity-based interference, can give rise to the intrusion of ungrammatical retrieval candidates, leading both to processing slow-downs and to errors of judgment that take the form of illusions of grammaticality in patently ungrammatical structures. A notable achievement is that good quantitative fits are achieved without adjusting the key model parameters.
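The logic of the partial-match prediction can be illustrated with a toy cue-matching sketch (not the article's ACT-R implementation): a candidate's chance of being retrieved grows with the number of retrieval cues it matches, so under noise a partially matching but grammatically unlicensed candidate occasionally clears the retrieval threshold, producing an illusion of grammaticality. The cue names, feature sets, and numeric values below are illustrative assumptions.

```python
# Toy illustration of partial-cue matching and intrusive licensing.
import random

CUES = {"+negative", "+c-commanding"}   # cues assumed to be set by an NPI such as "ever"

def best_match(candidates, noise_sd=0.4, weight=1.0):
    # Activation boost grows with the number of retrieval cues a candidate matches.
    return max(weight * len(CUES & feats) + random.gauss(0, noise_sd)
               for feats in candidates)

def licensed(candidates, threshold=1.2):
    # Retrieval "succeeds" (the NPI appears licensed) if some candidate clears the threshold.
    return best_match(candidates) > threshold

grammatical   = [{"+negative", "+c-commanding"}]  # e.g., a c-commanding licensor ("no man ...")
ungrammatical = [{"+negative"}]                   # e.g., an embedded, non-c-commanding "no"-phrase

n = 10_000
print("grammatical judged acceptable:  ", sum(licensed(grammatical) for _ in range(n)) / n)
print("ungrammatical judged acceptable:", sum(licensed(ungrammatical) for _ in range(n)) / n)
# The second rate is clearly above zero: the partial match intrudes -> illusion of grammaticality.
```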
7.
We present a comprehensive empirical evaluation of the ACT-R-based model of sentence processing developed by Lewis and Vasishth (2005) (LV05). The predictions of the model are compared with the results of a recent meta-analysis of published reading studies on retrieval interference in reflexive-/reciprocal-antecedent and subject-verb dependencies (Jäger, Engelmann, & Vasishth, 2017). The comparison shows that the model has only partial success in explaining the data, and we propose that its prediction space is restricted by oversimplifying assumptions. We then implement a revised model that takes into account differences between individual experimental designs in terms of the prominence of the target and the distractor in memory- and context-dependent cue-feature associations. The predictions of the original and the revised model are quantitatively compared with the results of the meta-analysis. Our simulations show that, compared to the original LV05 model, the revised model accounts for the data better. The results suggest that effects of prominence and variable cue-feature associations need to be considered in the interpretation of existing empirical results and in the design and planning of future experiments. With regard to retrieval interference in sentence processing and to the broader field of psycholinguistic studies, we conclude that well-specified models in tandem with high-powered experiments are needed in order to uncover the underlying cognitive processes.
8.
Kline, Jeffrey & Luckraz, Shravan. Synthese, 2010, 179(1): 103–114.
Information protocols (IPs) were developed to describe players who learn their social situation by their experiences. Although IPs look similar to colored multi-graphs (MGs), the two objects are constructed in fundamentally different ways. IPs are constructed using the global concept of history, whereas graphs are constructed using the local concept of edges. We give necessary and sufficient conditions for each theory to be captured by the other. We find that the necessary and sufficient condition for IP theory to be captured by MG theory, which we call SE, excludes relevant game situations. Hence, we conclude that IP theory remains a vital tool and cannot be replaced by MG theory.
9.
An English double-embedded relative clause from which the middle verb is omitted can often be processed more easily than its grammatical counterpart, a phenomenon known as the grammaticality illusion. This effect has been found to be reversed in German, suggesting that the illusion is language specific rather than a consequence of universal working memory constraints. We present results from three self-paced reading experiments showing that Dutch native speakers also do not show the grammaticality illusion in Dutch, whereas both German and Dutch native speakers do show the illusion when reading English sentences. These findings provide evidence against working memory constraints as an explanation for the observed effect in English. We propose an alternative account based on the statistical patterns of the languages involved. In support of this alternative, a single recurrent neural network model that is trained on both Dutch and English sentences is shown to predict the cross-linguistic difference in the grammaticality effect.
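The modeling idea can be sketched as follows: a simple recurrent network (SRN) is trained on word sequences, and its word-by-word surprisal is compared for a grammatical double embedding and its verb-omitted counterpart. The article trains one network jointly on large Dutch and English corpora; the tiny English-only corpus below is only meant to show the architecture and the surprisal comparison, and every sentence in it is an illustrative placeholder.

```python
# Minimal SRN surprisal sketch (toy data, not the article's corpus or trained model).
import torch
import torch.nn as nn

corpus = [s.split() for s in [
    "the boy the girl the dog chased saw ran",
    "the dog chased the girl",
    "the girl saw the boy",
    "the boy ran",
]]
vocab = {w: i for i, w in enumerate(sorted({w for s in corpus for w in s}))}

class SRN(nn.Module):
    def __init__(self, v, d=32):
        super().__init__()
        self.emb = nn.Embedding(v, d)
        self.rnn = nn.RNN(d, d, batch_first=True)   # Elman-style simple recurrent network
        self.out = nn.Linear(d, v)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

def surprisal(model, words):
    ids = torch.tensor([[vocab[w] for w in words]])
    with torch.no_grad():
        logp = torch.log_softmax(model(ids)[0, :-1], dim=-1)  # predict word t+1 from the prefix
    return [-logp[t, ids[0, t + 1]].item() for t in range(len(words) - 1)]

model = SRN(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(300):                                # train the one network on the whole corpus
    for sent in corpus:
        ids = torch.tensor([[vocab[w] for w in sent]])
        loss = loss_fn(model(ids)[0, :-1], ids[0, 1:])
        opt.zero_grad(); loss.backward(); opt.step()

# Word-by-word surprisal at the verb region, grammatical vs. middle verb omitted:
print(surprisal(model, "the boy the girl the dog chased saw ran".split()))
print(surprisal(model, "the boy the girl the dog chased ran".split()))
```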
10.
Understanding a sentence requires a working memory of the partial products of comprehension, so that linguistic relations between temporally distal parts of the sentence can be rapidly computed. We describe an emerging theoretical framework for this working memory system that incorporates several independently motivated principles of memory: a sharply limited attentional focus, rapid retrieval of item (but not order) information subject to interference from similar items, and activation decay (forgetting over time). A computational model embodying these principles provides an explanation of the functional capacities and severe limitations of human processing, as well as accounts of reading times. The broad implication is that the detailed nature of cross-linguistic sentence processing emerges from the interaction of general principles of human memory with the specialized task of language comprehension.
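To make the "interference from similar items" principle concrete, the small calculation below uses the textbook ACT-R fan effect: the more items in memory that match a retrieval cue, the weaker each cue-to-item association, and hence the slower (and less reliable) the retrieval. The parameter values are assumptions chosen for illustration, not fitted quantities from this framework.

```python
# Illustrative fan-effect calculation: more cue-matching items -> slower retrieval.
import math

def retrieval_latency(n_matching_items, base_activation=0.5,
                      cue_weight=1.0, max_strength=1.5, latency_factor=0.2):
    strength = max_strength - math.log(n_matching_items)   # associative strength shrinks with fan
    activation = base_activation + cue_weight * strength
    return latency_factor * math.exp(-activation)           # retrieval latency in seconds

for fan in (1, 2, 4):
    print(f"{fan} similar item(s) matching the cue -> "
          f"{retrieval_latency(fan) * 1000:.0f} ms retrieval latency")
```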