Similar Articles
20 similar articles found (search time: 15 ms)
1.
Subjects read and recalled 12 short texts in a memory recall experiment. The order in which subjects recalled the propositions in the text was recorded. A causal network analysis of each text was then done in order to determine how the propositions in each text were causally related. In addition, an episodic memory network analysis of each text was done in order to represent the original order of propositions presented to each subject in the experiment. The human text recall data were then analyzed using a new statistical methodology known as the temporal Markov field (TMF) approach, which makes explicit probabilistic predictions about the ordering of propositions in human subject recall protocols in terms of the causal network and episodic memory network analysis of a given text. Samples from the TMF probability model were then used to generate synthetic protocol data using half of the human subject data. Statistics computed with respect to the remaining half of the human subject data and the synthesized protocol data were qualitatively similar in many respects. Relevant discrepancies between the human protocol data and synthesized protocol data were also identified.
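The abstract does not give the TMF equations, but the general idea — recall order is predicted probabilistically from causal connectivity — can be sketched as follows. The sampling rule and the `boost` parameter are invented placeholders for illustration, not the model from the paper:

```python
import random

def sample_recall_order(n_props, causal_links, boost=3.0, seed=0):
    """Sample one synthetic recall order for a text's propositions.

    Propositions causally linked to the most recently recalled
    proposition get `boost` times the base selection weight -- a toy
    stand-in for the TMF model's conditional ordering distribution.
    """
    rng = random.Random(seed)
    remaining = list(range(n_props))
    order = []
    while remaining:
        last = order[-1] if order else None
        weights = [boost if (last is not None and (last, p) in causal_links) else 1.0
                   for p in remaining]
        choice = rng.choices(remaining, weights=weights, k=1)[0]
        order.append(choice)
        remaining.remove(choice)
    return order
```

Repeated sampling with different seeds would yield a distribution of synthetic protocols whose statistics can be compared against held-out human data, mirroring the paper's split-half design.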

2.
English texts were constructed from propositional bases. One set of 16-word sentences was obtained from semantic bases containing from 4 to 9 propositions. For another set of sentences and paragraphs, number of words and number of propositions covaried. Subjects read the texts at their own rate and recalled them immediately. For the 16-word sentences, subjects needed 1.5 sec additional reading time to process each proposition. For longer texts, this value increased. In another experimental condition reading time was controlled by the experimenter. The analysis of both the text and the recall protocols in terms of number of propositions lent support to the notion that propositions are a basic unit of memory for text. However, evidence was also obtained that while the total number of propositions upon which a text was based proved to be an effective psychological variable, all propositions were not equally difficult to remember: superordinate propositions were recalled better than propositions which were structurally subordinate.
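The reported 1.5 s/proposition slope implies a simple linear reading-time model. A minimal sketch; the intercept `base_sec` is a made-up placeholder, since the abstract reports only the per-proposition increment:

```python
def predicted_reading_time(n_propositions, base_sec=6.0, per_prop_sec=1.5):
    """Reading time (seconds) grows linearly with proposition count.

    per_prop_sec=1.5 follows the abstract's figure for 16-word
    sentences; base_sec is an illustrative intercept, not a value
    reported in the paper.
    """
    return base_sec + per_prop_sec * n_propositions
```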

3.
4.
Titles can alter the comprehension of a text by affecting the selection of information from a text and the organization of this information in memory. Text comprehension is assumed to involve an organizational process that results in the formation of a text base, an ordered list of semantic units—propositions. The text base can be used as a retrieval scheme to reconstruct the text. Procedures for assigning propositions as more relevant to some themes than to others are developed and applied to texts. Texts with biasing titles were used in an experiment to demonstrate that immediate free recall is biased toward the theme emphasized in the title. The comprehension process which is guided by the text’s thematic information is described.

5.
Many psychological theories of semantic cognition assume that concepts are represented by features. The empirical procedures used to elicit features from humans rely on explicit human judgments which limit the scope of such representations. An alternative computational framework for semantic cognition that does not rely on explicit human judgment is based on the statistical analysis of large text collections. In the topic modeling approach, documents are represented as a mixture of learned topics where each topic is represented as a probability distribution over words. We propose feature-topic models, where each document is represented by a mixture of learned topics as well as predefined topics that are derived from feature norms. Results indicate that this model leads to systematic improvements in generalization tasks. We show that the learned topics in the model play an important role in the generalization performance by including words that are not part of current feature norms.
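In a topic model of this kind, a word's probability in a document is a mixture over topics, some learned and some fixed in advance. A minimal sketch with invented topic distributions (in the paper, the predefined topics come from empirical feature norms):

```python
def word_prob(doc_topic_mix, topic_word_dists, word):
    """P(word | document) = sum over topics t of P(t | doc) * P(word | t).

    doc_topic_mix: {topic_name: probability}, summing to 1.
    topic_word_dists: {topic_name: {word: probability}}.
    """
    return sum(p_t * topic_word_dists[t].get(word, 0.0)
               for t, p_t in doc_topic_mix.items())

# Illustrative distributions only -- one "learned" topic plus one
# predefined feature topic standing in for a feature-norm topic.
topics = {"learned": {"bank": 0.5, "money": 0.5},
          "animal_features": {"fur": 0.6, "tail": 0.4}}
mix = {"learned": 0.7, "animal_features": 0.3}
```

Because the document mixes both topic types, it can assign probability to feature words ("fur") and corpus words ("money") at once, which is the generalization behavior the abstract describes.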

6.
A localist, parallel constraint satisfaction, artificial neural network model is presented that accounts for a broad collection of attitude and attitude-change phenomena. The network represents the attitude object and cognitions and beliefs related to the attitude, as well as how to integrate a persuasive message into this network. Short-term effects are modeled by activation patterns due to parallel constraint satisfaction processes, and long-term effects are modeled by weight changes due to the settling patterns of activation. Phenomena modeled include thought-induced attitude polarization, elaboration and attitude strength, motivated reasoning and social influence, an integrated view of heuristic versus systematic persuasion, and implicit versus explicit attitude change. Results of the simulations are consistent with empirical results. The same set of simple mechanisms is used to model all the phenomena, which allows the model to offer a parsimonious theoretical account of how structure can impact attitude change. This model is compared with previous computational approaches to attitudes, and implications for attitude research are discussed.
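Parallel constraint satisfaction in a localist network can be sketched as iterative settling: each unit's activation is repeatedly nudged by weighted input from connected units plus any external (message) input until the pattern stabilizes. The update rule, rate, and clipping bounds below are generic choices, not the model's exact equations:

```python
def settle(weights, external, n_units, steps=100, rate=0.1):
    """Settle unit activations under soft constraints.

    weights: {(i, j): w} -- w > 0 for mutually supporting units
    (consistent cognitions), w < 0 for conflicting ones.
    external: per-unit input, e.g. from a persuasive message.
    Activations are clipped to [-1, 1]; a toy update rule.
    """
    act = [0.0] * n_units
    for _ in range(steps):
        net = [sum(weights.get((i, j), 0.0) * act[j] for j in range(n_units))
               for i in range(n_units)]
        act = [max(-1.0, min(1.0, a + rate * (n + e)))
               for a, n, e in zip(act, net, external)]
    return act
```

In the full model, the settled activation pattern gives the short-term attitude response, and weight changes based on that pattern give the long-term change.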

7.
Cognition and Instruction, 2013, 31(2): 143-175
PREG is a conceptual model of human question asking. The model contains a set of production rules that specify the conditions under which children and adults ask questions when they read expository texts. The essence of PREG's question-asking mechanism is the existence of discrepancies between the representation of text information and the reader's world knowledge, with a mediating role of pragmatics and metacognition. Both the explicit text and the world knowledge are represented in the form of a conceptual graph structure. Comparisons between text representations and readers' knowledge are carried out by examining the 3 components of conceptual graph structures: words, statements, and links between statements. Some of the predictions of PREG were tested on a corpus of questions generated by 8th-grade and 12th-grade students who read short scientific texts. These predictions were empirically supported when assessed on 2 criteria. First, the model was sufficient because it was able to account for nearly all of the questions produced by the students. Second, the model was discriminating; when signal detection analyses were applied to the data, PREG could identify the conditions in which particular classes of questions are or are not generated.
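PREG's production rules are specified in the paper; as a loose illustration of the discrepancy-driven idea, a rule might fire a question whenever a statement in the text representation has no match in the reader's world knowledge. The rule body and question template below are invented placeholders:

```python
def generate_questions(text_statements, world_knowledge):
    """Toy discrepancy-driven production rule.

    Fires one question per text statement that is absent from the
    reader's world knowledge; PREG's actual rules also compare words
    and links between statements, and are mediated by pragmatics
    and metacognition.
    """
    questions = []
    for s in text_statements:
        if s not in world_knowledge:
            questions.append(f"Why is it the case that {s}?")
    return questions
```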

8.
This article provides an overview of a probabilistic constraints framework for thinking about language acquisition and processing. The generative approach attempts to characterize knowledge of language (i.e., competence grammar) and then asks how this knowledge is acquired and used. Our approach is performance oriented: the goal is to explain how people comprehend and produce utterances and how children acquire this skill. Use of language involves exploiting multiple probabilistic constraints over various types of linguistic and nonlinguistic information. Acquisition is the process of accumulating this information, which begins in infancy. The constraint satisfaction processes that are central to language use are the same as the bootstrapping processes that provide entry to language for the child. Framing questions about acquisition in terms of models of adult performance unifies the two topics under a set of common principles and has important consequences for arguments concerning language learnability.

9.
Latent semantic analysis (LSA) is a statistical model of word usage that permits comparisons of semantic similarity between pieces of textual information. This paper summarizes three experiments that illustrate how LSA may be used in text-based research. Two experiments describe methods for analyzing a subject’s essay for determining from what text a subject learned the information and for grading the quality of information cited in the essay. The third experiment describes using LSA to measure the coherence and comprehensibility of texts.
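The core of LSA is a truncated singular value decomposition of a term-by-document matrix, after which texts can be compared by cosine similarity in the reduced space. A minimal numpy sketch — not the authors' full pipeline, which also applies entropy weighting and trains on large corpora:

```python
import numpy as np

def lsa_similarity(term_doc, k=2):
    """Cosine similarities between documents in a k-dim LSA space.

    term_doc: terms x documents count matrix (rows = terms).
    Returns a documents x documents similarity matrix.
    """
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    docs = (np.diag(s[:k]) @ Vt[:k]).T          # documents in latent space
    norms = np.linalg.norm(docs, axis=1, keepdims=True)
    docs = docs / np.clip(norms, 1e-12, None)
    return docs @ docs.T
```

Comparing a sentence vector against the preceding sentence's vector in this space is, roughly, how LSA-based coherence measures work.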

10.
11.
This paper develops a compositional, type-driven constraint semantic theory for a fragment of the language of subjective uncertainty. In the particular application explored here, the interpretation function of constraint semantics yields not propositions but constraints on credal states as the semantic values of declarative sentences. Constraints are richer than propositions in that constraints can straightforwardly represent assessments of the probability that the world is one way rather than another. The richness of constraints helps us model communicative acts in essentially the same way that we model agents’ credences. Moreover, supplementing familiar truth-conditional theories of epistemic modals with constraint semantics helps capture contrasts between strong necessity and possibility modals, on the one hand, and weak necessity modals, on the other.
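On this kind of view, a declarative sentence denotes a constraint on credal states rather than a proposition. As a toy rendering of the idea (the 0.5 threshold is a common textbook simplification for "probably", not necessarily this paper's semantics):

```python
def probably(prop):
    """Constraint laid down by 'probably p': accept exactly those
    credal states that assign p probability above one half.
    Toy threshold semantics for illustration only."""
    return lambda credence: credence(prop) > 0.5

# A credal state here is just a map from propositions to probabilities.
credence = {"rain": 0.8, "snow": 0.1}.get
```

A communicative act can then be modeled as the hearer adjusting their credal state to satisfy the constraint the sentence expresses.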

12.
Research in strategy use needs to provide comprehensive and detailed qualitative discussion of individual cases and their strategic processing of texts to deepen our understanding of the cognitive and metacognitive processes readers resort to when reading different texts for different purposes. Hence, the present paper aims to provide an in-depth and rich examination and interpretation of four Saudi EFL students (two good and two poor readers) processing two texts differing in structure (a narrative and an expository text) for different reading tasks. This involved a detailed qualitative discussion of the differences and similarities in reading problems and strategy use. Using think-aloud reports and follow-up interviews, the study identified four explicit word-related and six text-related problems reported by the four EFL readers in processing the assigned text types. Moreover, the study revealed how the selected cases varied in their strategic reactions to the reading problems encountered in the two texts. The study findings also demonstrated how the structural variation between the narrative and expository texts had a considerable effect on the quantity and quality of readers' reported difficulties and the reading strategies they employed.

13.
A general model of problem-solving processes based on misconception elimination is presented to simulate both impasses and solving processes. The model operates on goal-related rules and a set of constraint rules in the form of “if (state or goal), do not (Action)” for the explicit constraints in the instructions and the implicit constraints that come from misconceptions of legal moves. When impasses occur, a constraint elimination mechanism is applied. Because successive eliminations of implicit constraints enlarge the problem space and have an effect on planning, the model integrates “plan-based” and “constraint-based” approaches to problem-solving behavior. Simulating individual protocols of Tower of Hanoi situations shows that the model, which has a proper set of constraints, predicts a single move with no alternative on about 61% of the movements and that protocols are quite successfully simulated movement by movement. Finally, it is shown that many features of previous models are embedded in the constraint elimination model.
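The impasse-then-eliminate cycle can be sketched on a trivial number-line puzzle instead of the Tower of Hanoi: search under "if (state), do not (action)" constraints, and when no solution is found, drop an implicit constraint (a misconception) and retry. The search procedure and problem are invented for illustration:

```python
def solve(state, goal, deltas, constraints, depth=6, seen=frozenset()):
    """Depth-first search; `constraints` are (state, move) predicates
    that forbid a move, mirroring "if (state), do not (action)"."""
    if state == goal:
        return [state]
    if depth == 0 or state in seen:
        return None
    for d in deltas:
        if any(c(state, d) for c in constraints):
            continue
        path = solve(state + d, goal, deltas, constraints, depth - 1, seen | {state})
        if path:
            return [state] + path
    return None

def solve_with_elimination(state, goal, deltas, constraints):
    """On impasse, eliminate one implicit constraint and retry,
    enlarging the problem space (toy version of the mechanism)."""
    constraints = list(constraints)
    while True:
        path = solve(state, goal, deltas, constraints)
        if path is not None or not constraints:
            return path
        constraints.pop()          # discard a misconception
```

Each elimination strictly enlarges the set of legal moves, which is why successive impasses make previously unreachable solutions available.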

14.
This study examines the effects of feedback specificity on transfer of training and the mechanisms through which feedback can enhance or inhibit transfer. We used concurrent verbal protocol methodology to elicit and operationalize the explicit information processing activities used by 48 trainees performing the Furniture Factory computer simulation. We hypothesized and found support for a moderated mediation model. Increasing feedback specificity influenced the exposure trainees had to different task conditions and negatively affected their levels of explicit information processing. In turn, explicit information processes and levels of exposure to different task conditions interacted to impact transfer of training. Those who received less specific feedback relied more heavily on explicit information processing and had more exposure to the challenging aspects of the task than those who received more specific feedback, which differentially affected what they learned about the task. We discuss how feedback specificity and exposure to different task conditions may prime different learning processes.

15.
Two studies are reported that tested the assumption that learning is improved by presenting text and pictures compared to text only when the text conveys non-spatial rather than spatial information. In Experiment 1, 59 students learned with text containing either visual or spatial contents, both accompanied by the same pictures. The results confirmed the expected interference between the processing of spatial text contents and pictures: Learners who received text containing spatial information showed worse text and picture recall than learners who received text containing visual information. In Experiment 2, 85 students were randomly assigned to one of four conditions, which resulted from a 2×2 between-participants design, with picture presentation (with vs without) and text contents (visual vs spatial) as between-participants factors. Again the results confirmed the expected interference between processing of spatial text information and pictures, because beneficial effects of adding pictures to text were observed only when the texts conveyed visual information. Importantly, when no pictures were present no differences were observed between learners with either visual or spatial text contents, indicating that the observed effects are not caused by absolute differences between the two texts such as their difficulty. The implications of these results are discussed.

16.
The goal of this investigation was to determine which reading instruction improves multiple science text comprehension for college student readers. The authors first identified the cognitive processing strategies that are predictive of multiple science text comprehension (Study 1) and then used what they learned to experimentally test the effectiveness of explicit pre-reading instructions (Study 2). Study 1 showed that self-explaining was positively related to comprehension tasks. Study 2 showed that explicitly instructing participants to self-explain while reading multiple science texts enhanced comprehension test performance. These results showed that self-explanation during reading is a successful strategy for enhancing multiple science text comprehension.

17.
According to the constructionist theory, readers of narrative texts attempt to construct a meaningful referential situation model that addresses the reader's goals, the coherence of the text, and explanations of why the actions, events, and states described in the text are mentioned; inference generation is a key component of this process. Inferences fall into thirteen classes: six are generated online, five offline, and two are difficult to classify and require consideration of pragmatic factors; some inferences vary with the reader's goals. The theory further holds that, apart from local and global inferences, all other inferences draw on the reader's world knowledge. Satisfaction of reader goals, establishment of local and global coherence, and explanation of explicit information determine the comprehension of narrative texts.

18.
Metacomprehension monitoring and metacomprehension control in reading
CHEN Qishan. Acta Psychologica Sinica (《心理学报》), 2009, 41(8): 676-683
Using a reading → keyword processing → metacomprehension monitoring → Test 1 → selection of texts for rereading → Test 2 procedure, this study examined the effect of metacomprehension monitoring on metacomprehension control and reading comprehension performance. Results showed that writing keywords at a delay improved monitoring accuracy more than writing them immediately or not writing them at all. Aided by accurate monitoring, the delayed-keyword group exercised effective metacomprehension control, chose to reread the texts on which they had scored lowest in Test 1, and performed well in Test 2; the other two groups could only select for rereading the texts they judged difficult rather than those on which they had scored low, and performed poorly in Test 2. The accuracy of metacomprehension monitoring thus affects the effectiveness of metacomprehension control and, in turn, reading comprehension performance.

19.
In noun compounds in English, the modifying noun may be singular (mouse-eater) or an irregularly inflected plural (mice-eater), but regularly inflected plurals are dispreferred (*rats-eater). This phenomenon has been taken as strong evidence for dual-mechanism theories of lexical representations, which hold that regular (rule-governed) and irregular (exception) items are generated by qualitatively different and innately specified mechanisms. Using corpus analyses, behavioral studies, and computational modeling, we show that the rule-versus-exceptions approach makes a number of incorrect predictions. We propose a new account in which the acceptability of modifiers is determined by a constraint satisfaction process modulated by semantic, phonological, and other factors. The constraints are acquired by the child via general purpose learning algorithms, based on noun compounds and other constructions in the input. The account obviates the regular/irregular dichotomy while simultaneously providing a superior account of the data.
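A graded constraint satisfaction account of this kind can be sketched as a weighted sum of soft constraint scores per candidate modifier. The feature names, scores, and weights below are invented placeholders; in the proposed account they would be learned from the input:

```python
def acceptability(modifier_features, constraint_weights):
    """Graded acceptability of a compound modifier as a weighted sum
    of soft constraint scores (semantic fit, phonology, ...).
    All values here are illustrative, not learned from corpora."""
    return sum(constraint_weights[name] * value
               for name, value in modifier_features.items())
```

Unlike a categorical rule ("no regular plurals inside compounds"), this yields a gradient: *rats-eater* is not ruled out, just scored lower than *mouse-eater* or *mice-eater*.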

20.
Semantic associations and elaborative inference
In this article, a theoretical framework is proposed for the inference processes that occur during reading. According to the framework, inferences can vary in the degree to which they are encoded. This notion is supported by three experiments in this article that show that degree of encoding can depend on the amount of semantic-associative information available to support the inference processes. In the experiments, test words that express possible inferences from texts are presented for recognition. When testing is delayed, with other texts and test items intervening between a text and its test word, performance depends on the amount of semantic-associative information in the text. If the inferences represented by the test words are not supported by semantic associates in the text, they appear to be only minimally encoded (replicating McKoon & Ratcliff, 1986), but if they are supported by semantic associates, they are strongly encoded. With immediate testing, only 250 ms after the text, performance is shown to depend on semantic-associative information, not on textual information. This suggests that it is the fast availability of semantic information that allows it to support inference processes.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号