Similar Literature (20 results)
1.
We explored methods of using latent semantic analysis (LSA) to identify reading strategies in students’ self-explanations that are collected as part of a Web-based reading trainer. In this study, college students self-explained scientific texts, one sentence at a time. LSA was used to measure the similarity between the self-explanations and semantic benchmarks (groups of words and sentences that together represent reading strategies). Three types of semantic benchmarks were compared: content words, exemplars, and strategies. Discriminant analyses were used to classify global and specific reading strategies using the LSA cosines. All benchmarks contributed to the classification of general reading strategies, but the exemplars did the best in distinguishing subtle semantic differences between reading strategies. Pragmatic and theoretical concerns of using LSA are discussed.
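A minimal sketch of the benchmark-comparison step described above, assuming the self-explanation and each benchmark have already been projected into an LSA space; the vectors and benchmark contents below are placeholders, not the study's materials:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two LSA vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 300-dimensional LSA vectors; in practice these would come from
# projecting the self-explanation and each benchmark into a trained LSA space.
rng = np.random.default_rng(0)
self_explanation_vec = rng.normal(size=300)
benchmarks = {
    "content_words": rng.normal(size=300),
    "exemplars": rng.normal(size=300),
    "strategies": rng.normal(size=300),
}

# One cosine per benchmark type; scores like these would feed the discriminant analysis.
scores = {name: cosine(self_explanation_vec, vec) for name, vec in benchmarks.items()}
print(scores)
```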

2.
We tested a computer-based procedure for assessing reader strategies that was based on verbal protocols and utilized latent semantic analysis (LSA). Students were given self-explanation reading training (SERT), which teaches strategies that facilitate self-explanation during reading, such as elaboration based on world knowledge and bridging between text sentences. During a computerized version of SERT practice, students read texts and typed self-explanations into a computer after each sentence. The use of SERT strategies during this practice was assessed by determining the extent to which students used the information in the current sentence versus the prior text or world knowledge in their self-explanations. This assessment was made on the basis of human judgments and LSA. Both human judgments and LSA were remarkably similar and indicated that students who were not complying with SERT tended to paraphrase the text sentences, whereas students who were compliant with SERT tended to explain the sentences in terms of what they knew about the world and of information provided in the prior text context. The similarity between human judgments and LSA indicates that LSA will be useful in accounting for reading strategies in a Web-based version of SERT.

3.
The effectiveness of a domain-specific latent semantic analysis (LSA) in assessing reading strategies was examined. Students were given self-explanation reading training (SERT) and asked to think aloud after each sentence in a science text. Novice and expert human raters and two LSA spaces (general reading, science) rated the similarity of each think-aloud protocol to benchmarks representing three different reading strategies (minimal, local, and global). The science LSA space correlated highly with human judgments, and more highly than did the general reading space. Also, cosines from the science LSA space can distinguish between different levels of semantic similarity, but may have trouble distinguishing local processing protocols. Thus, a domain-specific LSA space is advantageous regardless of the size of the space. The results are discussed in the context of applying the science LSA to a computer-based version of SERT that gives online feedback based on LSA cosines.

4.
Latent semantic analysis (LSA) is a statistical technique for representing word meaning that has been widely used for making semantic similarity judgments between words, sentences, and documents. In order to perform an LSA analysis, an LSA space is created in a two-stage procedure, involving the construction of a word frequency matrix and the dimensionality reduction of that matrix through singular value decomposition (SVD). This article presents LANSE, an SVD algorithm specifically designed for LSA, which allows extremely large matrices to be processed using off-the-shelf computer hardware.
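The two-stage procedure mentioned above (word frequency matrix, then SVD-based dimensionality reduction) can be illustrated with off-the-shelf tools; this sketch uses scikit-learn rather than LANSE itself, and the toy corpus is purely hypothetical:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy corpus; real LSA spaces are built from many thousands of documents.
documents = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "latent semantic analysis reduces dimensionality",
]

# Stage 1: word frequency matrix (here documents x terms, sparse).
vectorizer = CountVectorizer()
term_matrix = vectorizer.fit_transform(documents)

# Stage 2: dimensionality reduction via truncated SVD.
svd = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = svd.fit_transform(term_matrix)
print(doc_vectors.shape)  # (3, 2)
```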

5.
In the present study, we tested a computer-based procedure for assessing very concise summaries (50 words long) of two types of text (narrative and expository) using latent semantic analysis (LSA) in comparison with the judgments of four human experts. LSA was used to estimate semantic similarity using six different methods: four holistic (summary-text, summary-summaries, summary-expert summaries, and pregraded-ungraded summary) and two componential (summary-sentence text and summary-main sentence text). A total of 390 Spanish middle and high school students (14–16 years old) and six experts read a narrative or expository text and later summarized it. The results support the viability of developing a computerized assessment tool using human judgments and LSA, although the correlation between human judgments and LSA was higher in the narrative text than in the expository, and LSA correlated more with human content ratings than with human coherence ratings. Finally, the holistic methods were found to be more reliable than the componential methods analyzed in this study.
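The validation logic, correlating LSA-based summary scores with expert ratings, might look something like the following sketch; all values are simulated placeholders rather than data from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated scores for 30 summaries: one LSA cosine per summary (the holistic
# summary-text method) and one averaged expert rating per summary (placeholder values).
lsa_scores = rng.uniform(0.2, 0.9, size=30)
expert_ratings = 10 * lsa_scores + rng.normal(0, 1.0, size=30)

# Pearson correlation between the LSA-based scores and the expert ratings.
r = np.corrcoef(lsa_scores, expert_ratings)[0, 1]
print(f"Pearson r between LSA and expert ratings: {r:.2f}")
```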

6.
Latent semantic analysis (LSA) is a statistical model of word usage that permits comparisons of semantic similarity between pieces of textual information. This paper summarizes three experiments that illustrate how LSA may be used in text-based research. Two experiments describe methods for analyzing a subject’s essay to determine from which text the subject learned the information and to grade the quality of information cited in the essay. The third experiment describes using LSA to measure the coherence and comprehensibility of texts.

7.
Meanings of words facilitate false acceptance as well as correct rejection of lures in recognition memory tests, depending on the experimental context. This suggests that semantic representations are both directly and indirectly (i.e., mediated by perceptual representations) used in remembering. Studies using memory conjunction error (MCE) paradigms, in which the lures consist of component parts of studied words, have reported semantic facilitation of rejection of the lures. However, attending to components of the lures could potentially cause this. Therefore, we investigated whether semantic overlap of lures facilitates MCEs using Japanese Kanji words, for which reading relies more heavily on whole-word images. The experiments demonstrated semantic facilitation of MCEs in a delayed recognition test (Experiment 1), in immediate recognition tests in which participants were prevented from using phonological or orthographic representations (Experiment 2), and a particularly salient effect for individuals with high semantic memory capacity (Experiment 3). Additionally, analysis of the receiver operating characteristic suggested that this effect is attributed to familiarity-based memory judgement and phantom recollection. These findings indicate that semantic representations can be directly used in remembering, even when perceptual representations of studied words are available.

8.
Latent semantic analysis (LSA) and transitional probability (TP), two computational methods used to reflect lexical semantic representation from large text corpora, were employed to examine the effects of word predictability on Chinese reading. Participants' eye movements were monitored, and the influence of word complexity (number of strokes), word frequency, and word predictability on different eye movement measures (first-fixation duration, gaze duration, and total time) was examined. We found influences of TP on first-fixation duration and gaze duration and of LSA on total time. The results suggest that TP reflects an early stage of lexical processing while LSA reflects a later stage.
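As an illustration of how a transitional probability could be estimated from corpus counts; the segmented toy corpus below is a placeholder, and the study's actual TP measure may have been computed differently:

```python
from collections import Counter

# Placeholder word-segmented corpus; not the materials used in the study.
corpus = [
    ["我们", "阅读", "文章"],
    ["我们", "阅读", "新闻"],
    ["他们", "阅读", "文章"],
]

unigrams = Counter(word for sentence in corpus for word in sentence)
bigrams = Counter(pair for sentence in corpus for pair in zip(sentence, sentence[1:]))

def transitional_probability(prev_word: str, word: str) -> float:
    """Estimate P(word | prev_word) as count(prev_word, word) / count(prev_word)."""
    if unigrams[prev_word] == 0:
        return 0.0
    return bigrams[(prev_word, word)] / unigrams[prev_word]

print(transitional_probability("阅读", "文章"))  # 2/3 in this toy corpus
```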

9.
Latent semantic analysis (LSA) is a model of knowledge representation for words. It works by applying dimension reduction to local co-occurrence data from a large collection of documents after performing singular value decomposition on it. When the reduction is applied, the system forms condensed representations for the words that incorporate higher order associations. The higher order associations are primarily responsible for any semantic similarity between words in LSA. In this article, a memory model is described that creates semantic representations for words that are similar in form to those created by LSA. However, instead of applying dimension reduction, the model builds the representations by using a retrieval mechanism from a well-known account of episodic memory.

10.
Latent semantic analysis (LSA) is a computational model of human knowledge representation that approximates semantic relatedness judgments. Two issues are discussed that researchers must attend to when evaluating the utility of LSA for predicting psychological phenomena. First, the role of semantic relatedness in the psychological process of interest must be understood. LSA indices of similarity should then be derived from this theoretical understanding. Second, the knowledge base (semantic space) from which similarity indices are generated must contain “knowledge” that is appropriate to the task at hand. Proposed solutions are illustrated with data from an experiment in which LSA-based indices were generated from theoretical analysis of the processes involved in understanding two conflicting accounts of a historical event. These indices predict the complexity of subsequent student reasoning about the event, as well as hand-coded predictions generated from think-aloud protocols collected when students were reading the accounts of the event.

11.
In distributional semantics models (DSMs) such as latent semantic analysis (LSA), words are represented as vectors in a high-dimensional vector space. This allows for computing word similarities as the cosine of the angle between two such vectors. In two experiments, we investigated whether LSA cosine similarities predict priming effects, in that higher cosine similarities are associated with shorter reaction times (RTs). Critically, we applied a pseudo-random procedure in generating the item material to ensure that we directly manipulated LSA cosines as an independent variable. We employed two lexical priming experiments with lexical decision tasks (LDTs). In Experiment 1 we presented participants with 200 different prime words, each paired with one unique target. We found a significant effect of cosine similarities on RTs. The same was true for Experiment 2, where we reversed the prime-target order (primes of Experiment 1 were targets in Experiment 2, and vice versa). The results of these experiments confirm that LSA cosine similarities can predict priming effects, supporting the view that they are psychologically relevant. The present study thereby provides evidence for qualifying LSA cosine similarities not only as a linguistic measure, but also as a cognitive similarity measure. However, it is also shown that other DSMs can outperform LSA as a predictor of priming effects.
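A schematic of the kind of item-level relationship implied here, regressing simulated lexical decision RTs on cosine similarities; the actual study used more elaborate analyses over real data, so this is only an illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated prime-target cosine similarities and lexical decision RTs (ms):
# higher cosine -> shorter RT, plus noise. Purely illustrative values.
cosines = rng.uniform(0.0, 0.8, size=200)
rts = 650 - 120 * cosines + rng.normal(0, 40, size=200)

# A simple linear fit; the slope should come out negative.
slope, intercept = np.polyfit(cosines, rts, deg=1)
print(f"estimated slope: {slope:.1f} ms per unit cosine")
```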

12.
13.
Within the connectionist triangle model of reading aloud, interaction between semantic and phonological representations occurs for all words but is particularly important for correct pronunciation of lower frequency exception words. This framework therefore predicts that (a) semantic dementia, which compromises semantic knowledge, should be accompanied by surface dyslexia, a frequency-modulated deficit in exception word reading, and (b) there should be a significant relationship between the severity of semantic degradation and the severity of surface dyslexia. The authors evaluated these claims with reference to 100 observations of reading data from 51 cases of semantic dementia. Surface dyslexia was rampant, and a simple composite semantic measure accounted for half of the variance in low-frequency exception word reading. Although in 3 cases initial testing revealed a moderate semantic impairment but normal exception word reading, all of these became surface dyslexic as their semantic knowledge deteriorated further. The connectionist account attributes such cases to premorbid individual variation in semantic reliance for accurate exception word reading. These results provide a striking demonstration of the association between semantic dementia and surface dyslexia, a phenomenon that the authors have dubbed SD-squared.

14.
In this study, we compared four expert graders with latent semantic analysis (LSA) to assess short summaries of an expository text. As is well known, there are technical difficulties for LSA in establishing a good semantic representation when analyzing short texts. In order to improve the reliability of LSA relative to human graders, we analyzed three new algorithms alongside the two holistic methods used in previous research (León, Olmos, Escudero, Cañas, & Salmerón, 2006). The three new algorithms were (1) the semantic common network algorithm, an adaptation of an algorithm proposed by W. Kintsch (2001, 2002) with respect to LSA as a dynamic model of semantic representation; (2) a best-dimension reduction measure of the latent semantic space, selecting those dimensions that best contribute to improving the LSA assessment of summaries (Hu, Cai, Wiemer-Hastings, Graesser, & McNamara, 2007); and (3) the Euclidean distance measure, used by Rehder et al. (1998), which incorporates vector length and the cosine measure at the same time. A total of 192 Spanish middle-grade students and 6 experts took part in this study. They read an expository text and produced a short summary. Results showed significantly higher reliability of LSA as a computerized assessment tool for expository text when it used a best-dimension algorithm rather than a standard LSA algorithm. The semantic common network algorithm also showed promising results.
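The contrast behind the third measure, that Euclidean distance is sensitive to vector length while the cosine is not, can be shown with placeholder vectors; this is a sketch under that assumption, not the authors' implementation:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def euclidean(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.linalg.norm(u - v))

rng = np.random.default_rng(2)
summary_vec = rng.normal(size=300)   # placeholder LSA vector for a summary
source_vec = rng.normal(size=300)    # placeholder LSA vector for the source text

# The cosine is unchanged if a vector is rescaled; the Euclidean distance is not.
print(cosine(summary_vec, source_vec), cosine(2 * summary_vec, source_vec))
print(euclidean(summary_vec, source_vec), euclidean(2 * summary_vec, source_vec))
```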

15.
The hypothesis that patients with Alzheimer's disease (AD) have a disturbance in semantic processing was tested using a new lexical-priming task, threshold oral reading. Healthy elderly controls showed significant effects of priming for word pairs that are associatively related (words that reliably co-occur in word association tests) and for word pairs that are semantically related (high-frequency exemplars that belong to the same superordinate category but are not high-frequency associates). AD patients showed effects of priming for associatively related words but not for word pairs that are related only by shared semantic features. These results are consistent with the hypothesis that semantic processing is impaired in AD and suggest that independent networks of relationships among words and among concepts in semantic memory may be differentially disrupted with various forms of brain damage.

16.
Rapid word identification in pure alexia is lexical but not semantic
Following the notion that patients with pure alexia have access to two distinct reading strategies (letter-by-letter reading and semantic reading), a training program was devised to facilitate reading via semantics in a patient with pure alexia. Training utilized brief stimulus presentations and required category judgments rather than explicit word identification. The training was successful for trained words, but generalized poorly to untrained words. Additional studies involving oral reading of nouns and of functors also resulted in improved reading of trained words. Pseudowords could not be trained to criterion. The results suggest that improved reading can be achieved in pure alexia by pairing rapidly presented words with feedback. Focusing on semantic processing is not essential to this process. It is proposed that the training strengthens connections between the output of visual processing and preexisting orthographic representations.

17.
We report the performance of a neurologically impaired patient, JJ, whose oral reading of words exceeded his naming and comprehension performance for the same words, a pattern of performance that has been previously presented as evidence for "direct, nonsemantic, lexical" routes to output in reading. However, detailed analyses of JJ's reading and comprehension revealed two results that do not follow directly from the "direct route" hypothesis: (1) he accurately read aloud all orthophonologically regular words and just those irregular words for which he demonstrated some comprehension (as indicated by correct responses or within-category semantic errors in naming and comprehension tasks); and (2) his reading errors on words that were not comprehended at all (but were recognized as words) were phonologically plausible (e.g., soot read as "suit"). We account for these results by proposing that preserved sublexical mechanisms for converting print to sound, together with partially preserved semantic information, serve to mediate the activation of representations in the phonological output lexicon in the task of reading aloud. We present similar arguments for postulating an interaction between sublexical mechanisms and lexical output components of the spelling process.

18.
We report a study of the factors that affect reading in Spanish, a language with a transparent orthography. Our focus was on the influence of lexical semantic knowledge in phonological coding. This effect would be predicted to be minimal in Spanish, according to some accounts of semantic effects in reading. We asked 25 healthy adults to name 2,764 mono- and multisyllabic words. As is typical for psycholinguistics, variables capturing critical word attributes were highly intercorrelated. Therefore, we used principal components analysis (PCA) to derive orthogonalized predictors from raw variables. The PCA distinguished components relating to (1) word frequency, age of acquisition (AoA), and familiarity; (2) word AoA, imageability, and familiarity; (3) word length and orthographic neighborhood size; and (4) bigram type and token frequency. Linear mixed-effects analyses indicated significant effects on reading due to each PCA component. Our observations confirm that oral reading in Spanish proceeds through spelling–sound mappings involving lexical and sublexical units. Importantly, our observations distinguish between the effect of lexical frequency (the impact of the component relating to frequency, AoA, and familiarity) and the effect of semantic knowledge (the impact of the component relating to AoA, imageability, and familiarity). Semantic knowledge influences word naming even when all the words being read have regular spelling–sound mappings.

19.
A categorical judgment task was utilized to investigate the relationships between word recognition skills and reading achievement at several grade levels. In the first experiment skilled and unskilled readers from Grades 2, 4, and 6 made cognitive decisions about pairs of words using either graphemic, lexical, or semantic information. In Experiment 2 skilled, average, and unskilled readers from Grades 1, 3, and 5 made semantic decisions about word or picture pairs. The speed and accuracy of word encoding, lexical access, and semantic memory access processes varied as a function of reading ability. These results suggest that inefficient word recognition skills can contribute to reading deficiencies as can deficiencies in semantic memory organization.

20.
A divided visual field (DVF) experiment examined the semantic processing strategies employed by the cerebral hemispheres to determine whether strategies observed with written word stimuli generalize to other media for communicating semantic information. We employed picture stimuli and varied the degree of semantic relatedness between the picture pairs. Participants made an on-line semantic relatedness judgment in response to sequentially presented pictures. We found that when pictures were presented to the right hemisphere, semantic relatedness judgments were generally more accurate than when they were presented to the left hemisphere. Furthermore, consistent with earlier DVF studies employing words, we conclude that the RH is better at accessing or maintaining access to information that has a weak or more remote semantic relationship. We also found evidence of faster access for pictures presented to the LH in the strongly related condition. Overall, these results are consistent with earlier DVF word studies that argue that the cerebral hemispheres each play an important and separable role during semantic retrieval.

