Similar Documents
20 similar documents were retrieved.
1.
We explored methods of using latent semantic analysis (LSA) to identify reading strategies in students' self-explanations that are collected as part of a Web-based reading trainer. In this study, college students self-explained scientific texts, one sentence at a time. LSA was used to measure the similarity between the self-explanations and semantic benchmarks (groups of words and sentences that together represent reading strategies). Three types of semantic benchmarks were compared: content words, exemplars, and strategies. Discriminant analyses were used to classify global and specific reading strategies using the LSA cosines. All benchmarks contributed to the classification of general reading strategies, but the exemplars were best at distinguishing subtle semantic differences between reading strategies. Pragmatic and theoretical concerns of using LSA are discussed.
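A minimal sketch of the cosine step described above, assuming vectors for a self-explanation and for each semantic benchmark have already been obtained from an LSA space (the function and variable names are illustrative; the study itself fed the cosines into discriminant analyses rather than a nearest-benchmark rule):

```python
import numpy as np

def cosine(a, b):
    # Cosine of the angle between two LSA vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_strategy(explanation_vec, benchmark_vecs):
    # benchmark_vecs maps a strategy label to the LSA vector of its benchmark
    # (content words, exemplar self-explanations, or strategy descriptions).
    # The self-explanation is assigned the strategy whose benchmark it is
    # closest to in the space.
    scores = {label: cosine(explanation_vec, vec)
              for label, vec in benchmark_vecs.items()}
    return max(scores, key=scores.get), scores
```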

2.
We tested a computer-based procedure, based on verbal protocols and latent semantic analysis (LSA), for assessing reader strategies. Students were given self-explanation reading training (SERT), which teaches strategies that facilitate self-explanation during reading, such as elaboration based on world knowledge and bridging between text sentences. During a computerized version of SERT practice, students read texts and typed self-explanations into a computer after each sentence. The use of SERT strategies during this practice was assessed by determining the extent to which students used the information in the current sentence versus the prior text or world knowledge in their self-explanations. This assessment was made on the basis of human judgments and LSA. Human judgments and LSA estimates were remarkably similar and indicated that students who were not complying with SERT tended to paraphrase the text sentences, whereas students who were compliant with SERT tended to explain the sentences in terms of what they knew about the world and of information provided in the prior text context. The similarity between human judgments and LSA indicates that LSA will be useful in accounting for reading strategies in a Web-based version of SERT.

3.
The effectiveness of a domain-specific latent semantic analysis (LSA) space in assessing reading strategies was examined. Students were given self-explanation reading training (SERT) and asked to think aloud after each sentence in a science text. Novice and expert human raters and two LSA spaces (general reading, science) rated the similarity of each think-aloud protocol to benchmarks representing three different reading strategies (minimal, local, and global). The science LSA space correlated highly with human judgments, and more highly than did the general reading space. Also, cosines from the science LSA space can distinguish between different levels of semantic similarity, but may have trouble distinguishing local-processing protocols. Thus, a domain-specific LSA space is advantageous regardless of the size of the space. The results are discussed in the context of applying the science LSA space to a computer-based version of SERT that gives online feedback based on LSA cosines.

4.
Latent semantic analysis (LSA) is a statistical technique for representing word meaning that has been widely used for making semantic similarity judgments between words, sentences, and documents. In order to perform an LSA analysis, an LSA space is created in a two-stage procedure, involving the construction of a word frequency matrix and the dimensionality reduction of that matrix through singular value decomposition (SVD). This article presents LANSE, an SVD algorithm specifically designed for LSA, which allows extremely large matrices to be processed using off-the-shelf computer hardware.
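A minimal sketch of the two-stage procedure described above, using off-the-shelf tools (scikit-learn) rather than the LANSE algorithm itself, and a toy corpus in place of a large document collection:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

documents = [
    "the cat sat on the mat",
    "dogs and cats are common pets",
    "the stock market fell sharply today",
]  # toy corpus; real LSA spaces are built from thousands of documents

# Stage 1: build the document-by-word frequency matrix.
counts = CountVectorizer().fit_transform(documents)

# Stage 2: reduce its dimensionality with a truncated SVD.
svd = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = svd.fit_transform(counts)   # documents in the reduced space
word_vectors = svd.components_.T          # words in the reduced space
```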

5.
The hypothesis that patients with Alzheimer's disease (AD) have a disturbance in semantic processing was tested using a new lexical-priming task, threshold oral reading. Healthy elderly controls showed significant effects of priming for word pairs that are associatively related (words that reliably co-occur in word association tests) and for word pairs that are semantically related (high-frequency exemplars that belong to the same superordinate category but are not high-frequency associates). AD patients showed effects of priming for associatively related words but not for word pairs that are related only by shared semantic features. These results are consistent with the hypothesis that semantic processing is impaired in AD and suggest that independent networks of relationships among words and among concepts in semantic memory may be differentially disrupted with various forms of brain damage.

6.
In the present study, we tested a computer-based procedure for assessing very concise summaries (50 words long) of two types of text (narrative and expository) using latent semantic analysis (LSA) in comparison with the judgments of four human experts. LSA was used to estimate semantic similarity using six different methods: four holistic (summary-text, summary-summaries, summary-expert summaries, and pregraded-ungraded summary) and two componential (summary-sentence text and summary-main sentence text). A total of 390 Spanish middle and high school students (14–16 years old) and six experts read a narrative or expository text and later summarized it. The results support the viability of developing a computerized assessment tool using human judgments and LSA, although the correlation between human judgments and LSA was higher for the narrative text than for the expository text, and LSA correlated more strongly with human content ratings than with human coherence ratings. Finally, the holistic methods were found to be more reliable than the componential methods analyzed in this study.
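A hedged sketch of two of the six methods (the holistic summary-text comparison and the componential summary-sentence comparison), assuming LSA vectors for the summary, the full text, and its individual sentences are already available; names are illustrative:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def holistic_score(summary_vec, text_vec):
    # Holistic method: one cosine between the whole summary and the whole text.
    return cosine(summary_vec, text_vec)

def componential_score(summary_vec, sentence_vecs):
    # Componential method: average the cosines between the summary and each
    # sentence of the source text.
    return float(np.mean([cosine(summary_vec, s) for s in sentence_vecs]))
```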

7.
Latent semantic analysis (LSA) is a statistical model of word usage that permits comparisons of semantic similarity between pieces of textual information. This paper summarizes three experiments that illustrate how LSA may be used in text-based research. Two experiments describe methods for analyzing a subject's essay for determining from what text a subject learned the information and for grading the quality of information cited in the essay. The third experiment describes using LSA to measure the coherence and comprehensibility of texts.

8.
Latent semantic analysis (LSA) is a computational model of human knowledge representation that approximates semantic relatedness judgments. Two issues are discussed that researchers must attend to when evaluating the utility of LSA for predicting psychological phenomena. First, the role of semantic relatedness in the psychological process of interest must be understood. LSA indices of similarity should then be derived from this theoretical understanding. Second, the knowledge base (semantic space) from which similarity indices are generated must contain "knowledge" that is appropriate to the task at hand. Proposed solutions are illustrated with data from an experiment in which LSA-based indices were generated from theoretical analysis of the processes involved in understanding two conflicting accounts of a historical event. These indices predict the complexity of subsequent student reasoning about the event, as well as hand-coded predictions generated from think-aloud protocols collected when students were reading the accounts of the event.

9.
How is the meaning of a word retrieved without interference from recently viewed words? The ROUSE theory of priming assumes a discounting process to reduce source confusion between subsequently presented words. As applied to semantic satiation, this theory predicted a loss of association between the lexical item and meaning. Four experiments tested this explanation in a speeded category-matching task. All experiments used lists of 20 trials that presented a cue word for 1 s followed by a target word. Randomly mixed across the list, 10 trials used cues drawn from the same category whereas the other 10 trials used cues from 10 other categories. In Experiments 1a and 1b, the cues were repeated category labels (FRUIT–APPLE) and responses gradually slowed for the repeated category. In Experiment 2, the cues were nonrepeated exemplars (PEAR–APPLE) and responses remained faster for the repeated category. In Experiment 3, the cues were repeated exemplars in a word matching task (APPLE–APPLE) and responses again remained faster for the repeated category.

10.
Latent semantic analysis (LSA) and transitional probability (TP), two computational methods used to reflect lexical semantic representation from large text corpora, were employed to examine the effects of word predictability on Chinese reading. Participants' eye movements were monitored, and the influence of word complexity (number of strokes), word frequency, and word predictability on different eye movement measures (first-fixation duration, gaze duration, and total time) was examined. We found influences of TP on first-fixation duration and gaze duration, and of LSA on total time. The results suggest that TP reflects an early stage of lexical processing, while LSA reflects a later stage.
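As a point of reference, forward transitional probability can be computed from bigram and unigram counts; the sketch below uses a toy word-segmented Chinese sequence, whereas the study derived TP from a large corpus:

```python
from collections import Counter

def transitional_probabilities(tokens):
    # Forward transitional probability: P(w2 | w1) = count(w1 w2) / count(w1).
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {pair: count / unigrams[pair[0]] for pair, count in bigrams.items()}

toy = "我 喜欢 看 书 我 喜欢 喝 茶".split()   # toy word-segmented sentence
tp = transitional_probabilities(toy)
print(tp[("我", "喜欢")])   # 1.0: in this toy corpus "喜欢" always follows "我"
```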

11.
Meanings of words facilitate false acceptance as well as correct rejection of lures in recognition memory tests, depending on the experimental context. This suggests that semantic representations are used in remembering both directly and indirectly (i.e., mediated by perceptual representations). Studies using memory conjunction error (MCE) paradigms, in which the lures consist of component parts of studied words, have reported semantic facilitation of rejection of the lures. However, this facilitation could be caused by attending to components of the lures. Therefore, we investigated whether semantic overlap of lures facilitates MCEs, using Japanese Kanji words, whose reading relies more on the whole-word form. The experiments demonstrated semantic facilitation of MCEs in a delayed recognition test (Experiment 1) and in immediate recognition tests in which participants were prevented from using phonological or orthographic representations (Experiment 2), and a more salient effect for individuals with high semantic memory capacity (Experiment 3). Additionally, analysis of the receiver operating characteristic suggested that this effect is attributable to familiarity-based memory judgement and phantom recollection. These findings indicate that semantic representations can be directly used in remembering, even when perceptual representations of studied words are available.

12.
Latent semantic analysis (LSA) is a model of knowledge representation for words. It works by performing singular value decomposition (SVD) on local co-occurrence data from a large collection of documents and then reducing the dimensionality of the result. When the reduction is applied, the system forms condensed representations for the words that incorporate higher-order associations. These higher-order associations are primarily responsible for any semantic similarity between words in LSA. In this article, a memory model is described that creates semantic representations for words that are similar in form to those created by LSA. However, instead of applying dimension reduction, the model builds the representations by using a retrieval mechanism from a well-known account of episodic memory.

13.
Semantic memory impairment is a common feature of dementia of the Alzheimer type (DAT). Recent research has shown that patients with DAT are more impaired (relative to non-demented controls) in generating exemplars from a particular semantic category (e.g., animals) than words beginning with a particular letter, exhibit an altered temporal dynamic during the production of category exemplars, are impaired on confrontation naming tasks and make predominantly superordinate or semantically related errors, consistently misidentify the same objects across a variety of semantic tasks, and have alterations in multi-dimensional scaling models of their semantic network that are indicative of a loss of concepts and associations. These results are consistent with the view that Alzheimer's disease results in a breakdown in the organization and structure of semantic knowledge as neurodegeneration spreads to the association cortices that presumably store semantic representations.

14.
In distributional semantic models (DSMs) such as latent semantic analysis (LSA), words are represented as vectors in a high-dimensional vector space. This allows for computing word similarities as the cosine of the angle between two such vectors. In two experiments, we investigated whether LSA cosine similarities predict priming effects, in that higher cosine similarities are associated with shorter reaction times (RTs). Critically, we applied a pseudo-random procedure in generating the item material to ensure that we directly manipulated LSA cosines as an independent variable. We employed two lexical priming experiments with lexical decision tasks (LDTs). In Experiment 1 we presented participants with 200 different prime words, each paired with one unique target. We found a significant effect of cosine similarities on RTs. The same was true for Experiment 2, where we reversed the prime-target order (primes of Experiment 1 were targets in Experiment 2, and vice versa). The results of these experiments confirm that LSA cosine similarities can predict priming effects, supporting the view that they are psychologically relevant. The present study thereby provides evidence for qualifying LSA cosine similarities not only as a linguistic measure, but also as a cognitive similarity measure. However, it is also shown that other DSMs can outperform LSA as a predictor of priming effects.
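A rough sketch of the idea of treating the LSA cosine as a manipulated variable: candidate prime-target pairs are sorted by cosine and split into bins spanning the similarity range. This only illustrates the idea and is not the authors' exact pseudo-random item-generation procedure; all names are illustrative:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def bin_pairs_by_cosine(candidate_pairs, vectors, n_bins=4):
    # Sort candidate prime-target pairs by their LSA cosine and split them into
    # equal-sized bins spanning the similarity range, so that cosine similarity
    # can be treated as a manipulated independent variable.
    scored = sorted(candidate_pairs,
                    key=lambda pair: cosine(vectors[pair[0]], vectors[pair[1]]))
    size = len(scored) // n_bins
    return [scored[i * size:(i + 1) * size] for i in range(n_bins)]
```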

15.
In this study, we compared four expert graders with latent semantic analysis (LSA) to assess short summaries of an expository text. As is well known, LSA has technical difficulties in establishing a good semantic representation when analyzing short texts. In order to improve the reliability of LSA relative to human graders, we analyzed three new algorithms alongside two holistic methods used in previous research (León, Olmos, Escudero, Cañas, & Salmerón, 2006). The three new algorithms were (1) the semantic common network algorithm, an adaptation of an algorithm proposed by W. Kintsch (2001, 2002) with respect to LSA as a dynamic model of semantic representation; (2) a best-dimension reduction measure of the latent semantic space, selecting those dimensions that best contribute to improving the LSA assessment of summaries (Hu, Cai, Wiemer-Hastings, Graesser, & McNamara, 2007); and (3) the Euclidean distance measure, used by Rehder et al. (1998), which incorporates both vector length and the cosine measure. A total of 192 Spanish middle-grade students and 6 experts took part in this study. They read an expository text and produced a short summary. Results showed significantly higher reliability of LSA as a computerized assessment tool for expository text when it used a best-dimension algorithm rather than a standard LSA algorithm. The semantic common network algorithm also showed promising results.
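A minimal sketch of the third measure, the Euclidean distance between two LSA vectors, which (unlike the cosine) is sensitive to vector length as well as direction; names are illustrative:

```python
import numpy as np

def euclidean_score(summary_vec, reference_vec):
    # Euclidean distance between the summary's LSA vector and a reference
    # vector (e.g., the source text): a smaller distance means the summary
    # lies closer to the reference in the latent semantic space.
    return float(np.linalg.norm(np.asarray(summary_vec) - np.asarray(reference_vec)))
```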

16.
17.
Within the connectionist triangle model of reading aloud, interaction between semantic and phonological representations occurs for all words but is particularly important for correct pronunciation of lower frequency exception words. This framework therefore predicts that (a) semantic dementia, which compromises semantic knowledge, should be accompanied by surface dyslexia, a frequency-modulated deficit in exception word reading, and (b) there should be a significant relationship between the severity of semantic degradation and the severity of surface dyslexia. The authors evaluated these claims with reference to 100 observations of reading data from 51 cases of semantic dementia. Surface dyslexia was rampant, and a simple composite semantic measure accounted for half of the variance in low-frequency exception word reading. Although in 3 cases initial testing revealed a moderate semantic impairment but normal exception word reading, all of these became surface dyslexic as their semantic knowledge deteriorated further. The connectionist account attributes such cases to premorbid individual variation in semantic reliance for accurate exception word reading. These results provide a striking demonstration of the association between semantic dementia and surface dyslexia, a phenomenon that the authors have dubbed SD-squared.

18.
Experiment 1 examined whether the semantic transparency of an English unspaced compound word affected how long it took to process it in reading. Three types of opaque words were each compared with a matched set of transparent words (i.e., matched on the length and frequency of the constituents and the frequency of the word as a whole). Two sets of the opaque words were partially opaque: either the first constituent was not related to the meaning of the compound (opaque-transparent) or the second constituent was not related to the meaning of the compound (transparent-opaque). In the third set (opaque-opaque), neither constituent was related to the meaning of the compound. For all three sets, there was no significant difference between the opaque and the transparent words on any eye-movement measure. This replicates an earlier finding with Finnish compound words (Pollatsek & Hyönä, 2005) and indicates that, although there is now abundant evidence that the component constituents play a role in the encoding of compound words, the meaning of the compound word is not constructed from the parts, at least for compound words for which a lexical entry exists. Experiment 2 used the same compounds but with a space between the constituents. This presentation resulted in a transparency effect, indicating that when an assembly route is "forced", transparency does play a role.

19.
We report a study of the factors that affect reading in Spanish, a language with a transparent orthography. Our focus was on the influence of lexical semantic knowledge in phonological coding. This effect would be predicted to be minimal in Spanish, according to some accounts of semantic effects in reading. We asked 25 healthy adults to name 2,764 mono- and multisyllabic words. As is typical for psycholinguistics, variables capturing critical word attributes were highly intercorrelated. Therefore, we used principal components analysis (PCA) to derive orthogonalized predictors from raw variables. The PCA distinguished components relating to (1) word frequency, age of acquisition (AoA), and familiarity; (2) word AoA, imageability, and familiarity; (3) word length and orthographic neighborhood size; and (4) bigram type and token frequency. Linear mixed-effects analyses indicated significant effects on reading due to each PCA component. Our observations confirm that oral reading in Spanish proceeds through spelling–sound mappings involving lexical and sublexical units. Importantly, our observations distinguish between the effect of lexical frequency (the impact of the component relating to frequency, AoA, and familiarity) and the effect of semantic knowledge (the impact of the component relating to AoA, imageability, and familiarity). Semantic knowledge influences word naming even when all the words being read have regular spelling–sound mappings.
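A small sketch of the orthogonalization step, with random toy data standing in for the real word attributes; the subsequent mixed-effects models are not shown:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Toy matrix of correlated word attributes (one row per word, columns such as
# frequency, age of acquisition, familiarity, imageability, length).
rng = np.random.default_rng(0)
raw_predictors = rng.normal(size=(100, 5))

# Standardize the raw variables, then rotate them into orthogonal component
# scores that can be entered jointly into a (mixed-effects) regression
# without collinearity problems.
scores = PCA().fit_transform(StandardScaler().fit_transform(raw_predictors))
print(np.corrcoef(scores, rowvar=False).round(2))   # off-diagonals are ~0
```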

20.
Studies of implicit learning often examine people's sensitivity to sequential structure. Computational accounts have evolved to reflect this bias. An experiment conducted by Neil and Higham [Neil, G. J., & Higham, P. A. (2012). Implicit learning of conjunctive rule sets: An alternative to artificial grammars. Consciousness and Cognition, 21, 1393–1400] points to limitations in the sequential approach. In the experiment, participants studied words selected according to a conjunctive rule. At test, participants discriminated rule-consistent from rule-violating words but could not verbalize the rule. Although the data elude explanation by sequential models, an exemplar model of implicit learning can explain them. To make the case, we simulate the full pattern of results by incorporating vector representations for the words used in the experiment, derived from the large-scale semantic space models LSA and BEAGLE, into an exemplar model of memory, MINERVA 2. We show that basic memory processes in a classic model of memory capture implicit learning of non-sequential rules, provided that stimuli are appropriately represented.
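A hedged sketch of the MINERVA 2 echo-intensity computation with semantic vectors as stored traces; using the cosine as the similarity measure for continuous LSA/BEAGLE vectors is an assumption of this illustration, not necessarily the authors' exact implementation:

```python
import numpy as np

def echo_intensity(probe, memory_traces):
    # MINERVA 2: each stored trace is activated in proportion to the cube of
    # its similarity to the probe (cubing preserves the sign while sharpening
    # the contrast), and echo intensity is the sum of those activations. Here
    # similarity is the cosine between semantic vectors (e.g., from LSA).
    activations = []
    for trace in memory_traces:
        sim = np.dot(probe, trace) / (np.linalg.norm(probe) * np.linalg.norm(trace))
        activations.append(sim ** 3)
    return float(np.sum(activations))
```

Under this sketch, rule-consistent test words would produce higher echo intensity than rule-violating words because they resemble more of the stored study traces.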
