Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
2.
Although the Internet provides a wide variety of news, it can also create confusion through subjective personal channels such as personal TV, blogs, and unverified reports. Unverified news is written from a subjective standpoint, with personal opinions added rather than grounded in objective content, so readers may acquire knowledge through a distorted lens. In addition, fake news is being produced, and the problem of social polarization is becoming serious. Detecting fake news is therefore necessary, but distinguishing the truth of published news is not easy: the time available for verification is short compared with the speed of information sharing on the Internet, and news is diverse and strongly subjective. In this paper, the likelihood that an article is fake news is therefore estimated by reverse-tracking articles posted on the Cognitive System. The resulting average detection rate is 85%.

3.
Text classification involves deciding whether or not a document is about a given topic. It is an important problem in machine learning, because automated text classifiers have enormous potential for application in information retrieval systems. It is also an interesting problem for cognitive science, because it involves real world human decision making with complicated stimuli. This paper develops two models of human text document classification based on random walk and accumulator sequential sampling processes. The models are evaluated using data from an experiment where participants classify text documents presented one word at a time under task instructions that emphasize either speed or accuracy, and rate their confidence in their decisions. Fitting the random walk and accumulator models to these data shows that the accumulator provides a better account of the decisions made, and a “balance of evidence” measure provides the best account of confidence. Both models are also evaluated in the applied information retrieval context, by comparing their performance to established machine learning techniques on the standard Reuters‐21578 corpus. It is found that they are almost as accurate as the benchmarks, and make decisions much more quickly because they only need to examine a small proportion of the words in the document. In addition, the ability of the accumulator model to produce useful confidence measures is shown to have application in prioritizing the results of classification decisions.
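The accumulator model described above can be sketched as two evidence counters racing to a response threshold as words arrive one at a time. This is an illustrative sketch only: the evidence values, the threshold, and the exact form of the "balance of evidence" confidence measure are assumptions for the example, not the paper's fitted parameters.

```python
def classify(word_evidence, threshold=3.0):
    """Accumulator (race) model sketch for word-by-word text
    classification. Evidence for 'topic' vs 'other' accrues in
    separate counters; a response is made when either counter
    reaches the threshold. Returns (decision, confidence, words_read).
    """
    acc_yes, acc_no = 0.0, 0.0
    for i, e in enumerate(word_evidence, start=1):
        if e > 0:
            acc_yes += e        # word favours the topic
        else:
            acc_no += -e        # word favours the alternative
        if acc_yes >= threshold or acc_no >= threshold:
            decision = "topic" if acc_yes >= threshold else "other"
            # "Balance of evidence": gap between the winning and
            # losing accumulators at the moment of decision.
            return decision, abs(acc_yes - acc_no), i
    # Evidence exhausted without a threshold crossing: guess by balance.
    return ("topic" if acc_yes > acc_no else "other",
            abs(acc_yes - acc_no), len(word_evidence))
```

Note that the model stops early: in the example below the decision is reached after four words, regardless of how many words the document contains.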

4.
The relationship between language and memory was examined by testing accessibility of general knowledge across two languages in bilinguals. Mandarin-English speakers were asked questions such as “name a statue of someone standing with a raised arm while looking into the distance” and were more likely to name the Statue of Liberty when asked in English and the Statue of Mao when asked in Mandarin. Multivalent information (i.e., multiple possible answers to a question) and bivalent information (i.e., two possible answers to a question) were more susceptible to language dependency than univalent information (i.e., one possible answer to a question). Accuracy of retrieval showed language-dependent memory effects in both languages, while speed of retrieval showed language-dependent memory effects only in bilinguals’ more proficient language. These findings suggest that memory and language are tightly connected and that linguistic context at the time of learning may become integrated into memory content.

5.
In the present studies, we investigated inferences from an incompatibility statement. Starting with two propositions that cannot be true at the same time, these inferences consist of deducing the falsity of one from the truth of the other or deducing the truth of one from the falsity of the other. Inferences of this latter form are relevant to human reasoning since they are the formal equivalent of a discourse manipulation called the false dilemma fallacy, often used in politics and advertising in order to force a choice between two selected options. Based on research on content-related variability in conditional reasoning, we predicted that content would have an impact on how reasoners treat incompatibility inferences. Like conditional inferences, they present two invalid forms for which the logical response is one of uncertainty. We predicted that participants would endorse a smaller proportion of the invalid incompatibility inferences when more counterexamples are available. In Study 1, we found the predicted pattern using causal premises translated into incompatibility statements with many and few counterexamples. In Study 2A, we replicated the content effects found in Study 1, but with premises for which the incompatibility statement is a non-causal relation between classes. These results suggest that the tendency to fall into the false dilemma fallacy is modulated by the background knowledge of the reasoner. They also provide additional evidence on the link between semantic information retrieval and deduction.
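The logical structure behind the false dilemma can be checked by brute-force enumeration: an incompatibility statement not(A and B) licenses inferring not-B from A (the valid form), but inferring B from not-A is the invalid form, since both values of B remain consistent. A minimal truth-table sketch:

```python
from itertools import product

def consistent_values(premise_a):
    """Given the incompatibility statement not(A and B) plus a premise
    fixing A's truth value, return the set of values B can still take.
    A singleton set means the inference is determinate; a two-element
    set means the logically correct response is uncertainty."""
    values = set()
    for a, b in product([True, False], repeat=2):
        # Keep only rows satisfying both the incompatibility
        # statement and the premise about A.
        if not (a and b) and a == premise_a:
            values.add(b)
    return values

# Valid form:   from A true,  B must be false.
# Invalid form: from A false, B is undetermined -- concluding
# "therefore B" here is the false dilemma fallacy.
```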

6.
Content balancing is one of the most important issues in computerized classification testing. To adapt to variable-length forms, special treatment is needed to control content constraints without knowledge of the test length during the test. To this end, we propose the notions of ‘look-ahead’ and ‘step size’ to adaptively control content constraints at each item selection step. The step size gives a prediction of the number of items to be selected at the current stage, that is, how far we will look ahead. Two look-ahead content balancing (LA-CB) methods, one with a constant step size and another with an adaptive step size, are proposed as feasible solutions for balancing content areas in variable-length computerized classification testing. The proposed LA-CB methods are compared with conventional item selection methods in variable-length tests and are examined with different classification methods. Simulation results show that, integrated with heuristic item selection methods, the proposed LA-CB methods result in fewer constraint violations and maintain higher classification accuracy. In addition, the LA-CB method with an adaptive step size outperforms the one with a constant step size in content management. Furthermore, the LA-CB methods yield higher test efficiency when used with the sequential probability ratio test classification method.
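The sequential probability ratio test (SPRT) mentioned above is the classification engine that makes test length variable: testing stops as soon as the accumulated log-likelihood ratio crosses a decision bound. A minimal sketch, where the response probabilities and error rates are illustrative assumptions rather than values from the study:

```python
import math

def sprt(responses, p_pass=0.7, p_fail=0.5, alpha=0.05, beta=0.05):
    """Wald's SPRT for pass/fail classification. Each response is
    1 (correct) or 0 (incorrect); p_pass / p_fail are the assumed
    probabilities of a correct answer for examinees just above /
    below the cutoff. Returns (decision, items_used)."""
    upper = math.log((1 - beta) / alpha)   # classify "pass" above this
    lower = math.log(beta / (1 - alpha))   # classify "fail" below this
    llr = 0.0
    for i, x in enumerate(responses, start=1):
        if x:
            llr += math.log(p_pass / p_fail)
        else:
            llr += math.log((1 - p_pass) / (1 - p_fail))
        if llr >= upper:
            return "pass", i
        if llr <= lower:
            return "fail", i
    # Item pool exhausted before a bound was crossed.
    return "continue", len(responses)
```

Because the stopping point is unknown in advance, content constraints cannot simply be spread over a fixed test length, which is the problem the look-ahead step size is designed to solve.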

7.
In this paper, we investigated the usefulness of future reference sentence patterns for predicting the unfolding of future events. To obtain such patterns, we first collected sentences containing references to the future from newspapers and Web news. Based on this collection, we developed a novel method for the automatic extraction of frequent patterns from such sentences. The extracted patterns, consisting of multilayer semantic information and morphological information, were used to form a general model of linguistically expressed future. To assess the performance of the proposed method, we performed a number of evaluation experiments. In the first experiment, we evaluated the automatic extraction of future reference sentence patterns with the proposed extraction algorithm. In the second set of experiments, we estimated the effectiveness of those patterns by applying them to automatically classify sentences as future-referring or other. The final model was then tested on retrieving a new set of future reference sentences from a large news corpus. The results confirmed that the proposed method outperformed a state-of-the-art method in the fully automatic retrieval of future reference sentences. Lastly, we applied the method in practice to confirm its usefulness in two tasks. The first was to support human readers in the everyday prediction of unfolding future events. In the second, we developed a fully automatic prototype method for future prediction and tested its performance on the tasks included in the official Future Prediction Competence Test. The results indicate that the prototype system outperforms natural human forecasting capability.

8.
A Survey of Text Retrieval Models   Cited 2 times (0 self-citations, 2 citations by others)
Text retrieval is an important branch of information retrieval. With the rapid growth of information on the Internet, retrieving the information users need most has become increasingly critical. The text retrieval model is the core technology of text retrieval, and its performance directly affects the retrieval quality of a search engine. This paper surveys the classic retrieval models and recent progress on them, and analyzes the strengths and weaknesses of each model.
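Among the classic retrieval models such a survey covers, the probabilistic family is well represented by BM25. A minimal self-contained sketch of BM25 scoring (parameter values k1 = 1.2 and b = 0.75 are common defaults, not prescribed by the survey):

```python
import math
from collections import Counter

def bm25_score(query_terms, doc, docs, k1=1.2, b=0.75):
    """Minimal BM25: score one document against a bag-of-words query.
    `doc` and each element of `docs` are lists of tokens; `docs` is
    the whole collection, used for document frequencies and average
    document length."""
    n_docs = len(docs)
    avgdl = sum(len(d) for d in docs) / n_docs
    tf = Counter(doc)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in docs if term in d)       # document frequency
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
        # Term-frequency saturation with document-length normalization.
        denom = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
        score += idf * (k1 + 1) * tf[term] / denom
    return score
```

Ranking the collection then amounts to sorting documents by this score for a given query.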

9.
Dennis, S. (2005). Cognitive Science, 29(2), 145-193.
The syntagmatic paradigmatic model is a distributed, memory-based account of verbal processing. Built on a Bayesian interpretation of string edit theory, it characterizes the control of verbal cognition as the retrieval of sets of syntagmatic and paradigmatic constraints from sequential and relational long-term memory and the resolution of these constraints in working memory. Lexical information is extracted directly from text using a version of the expectation maximization algorithm. In this article, the model is described and then illustrated on a number of phenomena, including sentence processing, semantic categorization and rating, short-term serial recall, and analogical and logical inference. Subsequently, the model is used to answer questions about a corpus of tennis news articles taken from the Internet. The model's success demonstrates that it is possible to extract propositional information from naturally occurring text without employing a grammar, defining a set of heuristics, or specifying a priori a set of semantic roles.

10.
Localizing content in neural networks provides a bridge to understanding the way in which the brain stores and processes information. In this paper, I propose the existence of polytopes in the state space of the hidden layer of feedforward neural networks as vehicles of content. I analyze these geometrical structures from an information-theoretic point of view, invoking mutual information to help define the content stored within them. I establish how this proposal addresses the problem of misclassification and provide a novel solution to the disjunction problem, which hinges on the precise nature of the causal-informational framework for content advocated herein.
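The mutual-information machinery invoked above can be illustrated concretely: treat the stimulus class as one discrete variable and the index of the hidden-layer region (polytope) an activation falls into as another, then estimate I(X;Y) from samples. The pairing of classes with region indices below is invented for the example.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from (x, y) samples, where
    x is a stimulus class and y the index of the hidden-layer region
    the activation landed in. I(X;Y) = sum p(x,y) log2(p(x,y)/(p(x)p(y)))."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        # p / (p(x) * p(y)) written over raw counts to avoid
        # intermediate divisions.
        mi += p * math.log2(p * n * n / (px[x] * py[y]))
    return mi
```

When each class maps to its own region the estimate is 1 bit (for two equiprobable classes); when region index is independent of class it is 0, i.e. the region carries no content about the stimulus.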

11.
This study investigated the effect of first-person and third-person perceptions of web site information. Responses from a telephone survey of 226 participants in a stratified random sample indicated that (1) most participants had higher evaluations for television news than for news received on the Internet; (2) a third-person effect was present, in that most respondents generally thought that other people found the Internet easier to use than they did, and that other people were more likely to believe Internet information and trust its sources than they themselves were; and (3) evaluations of information on a particular web site could be increased by providing links to other web sites on the same topic. Links to other web sites may thus serve either a "referencing" function or a social confirmation function that increases evaluations of web site information.

12.
Dealing with COVID-19 and with the preventive measures taken to mitigate transmission of the virus has posed a great challenge to the population. While psychologists have expertise in preventive behavior change and in dealing with the mental health impact of such measures, their expertise needs to be communicated effectively to the public. Mass media play a critical role in times of crisis, in many cases being the only source of information. While most research focuses on information content as a factor affecting psychological responses to a collective traumatic event, the way information is framed in the media is likely to influence whether health professionals are perceived as trustworthy. This study analyzed the media framing of information from psychology during the COVID-19 pandemic in six countries in the Americas and Europe, identifying the most recurrent topics (n = 541 news items) related to psychology and mental health. In all six countries the media address the psychological needs of the population, which vary depending on the imposed restrictions. The news content is influenced by the scientific sources used by the media. While the most prevalent topics concern psychological risk and the need to seek mental health care, the least prevalent relate to counseling and behavioral guidelines for managing the psychological consequences of the pandemic. The findings provide insight into how psychological knowledge contributed to the understanding and mitigation of COVID-19 consequences in different countries, identify fields in which psychologists were consulted during a health emergency, and show a preference for consulting other experts when contextual or more macro-social explanations of the crisis were sought.

13.
Social media has become part of our day-to-day life and one of the most significant sources of information. Much of the information on social media is in the form of images, which has given rise to the distribution of fake news events that misinform users. To tackle this problem, we propose a model for the veracity analysis of image-based information on social media platforms. It involves an algorithm that validates the veracity of the text extracted from an image by searching for it on the web and checking the credibility of the top 15 Google search results, from which a reality parameter (Rp) is calculated; if Rp exceeds a threshold value, the event is classified as real, otherwise as fake. To test the performance of the proposed approach, we compute the recognition accuracy and compare the highest accuracy with similar state-of-the-art models to demonstrate the superior performance of our approach.
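The final decision step of such a pipeline is simple to sketch. This is a hedged illustration only: how the per-result credibility scores are obtained (source reputation, domain checks, etc.) is outside the sketch, the aggregation as a plain mean and the 0.5 threshold are assumptions, and neither matches the paper's exact definition of Rp.

```python
def reality_parameter(result_credibilities):
    """Aggregate credibility scores (each in 0..1) of the top search
    results into a single reality parameter Rp -- here, their mean.
    The per-result scores are assumed to come from an upstream
    source-credibility check, which this sketch does not implement."""
    return sum(result_credibilities) / len(result_credibilities)

def classify_event(result_credibilities, threshold=0.5):
    """Label an image-text event 'real' if Rp exceeds the threshold,
    'fake' otherwise."""
    rp = reality_parameter(result_credibilities)
    return "real" if rp > threshold else "fake"
```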

14.
The importance of image steganography is unquestionable in the field of secure multimedia communication. Imperceptibility and high payload capacity are crucial for any mode of steganography. The proposed work modifies edge-based image steganography to provide higher payload capacity and imperceptibility by making use of machine learning techniques. The approach uses an adaptive embedding process over Dual-Tree Complex Wavelet Transform (DT-CWT) subband coefficients. Machine-learning-based optimization techniques are employed to embed the secret data over optimal cover-image blocks with minimal retrieval error. The embedding process creates a unique secret key which is imperative for the retrieval of the data and needs to be transmitted to the receiver via a secure channel. This enhances security and prevents data theft by intruders. The algorithm's performance is evaluated with standard benchmark parameters such as PSNR, SSIM, CF, retrieval error, BPP, and histograms. The results show a stego-image with PSNR above 50 dB even with dense embedding of up to 7.87 BPP, indicating that the proposed work significantly surpasses state-of-the-art image steganographic systems.
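PSNR, the headline imperceptibility metric cited above, is straightforward to compute. A minimal sketch on flat lists of 8-bit pixel values (the wavelet-domain embedding itself is not reproduced here):

```python
import math

def psnr(cover, stego, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of pixel values. Higher PSNR means the stego image is
    closer to the cover, i.e. better imperceptibility; identical
    images give infinite PSNR."""
    mse = sum((c - s) ** 2 for c, s in zip(cover, stego)) / len(cover)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_value ** 2 / mse)
```

For instance, changing every pixel of an 8-bit image by one grey level (MSE = 1) gives roughly 48.13 dB, which puts the reported "above 50 dB" figure in context: the average per-pixel distortion is below one grey level.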

15.
Representationalists currently cannot explain counter-examples that involve indeterminate perceptual content, but a double content (DC) view is more promising. Four related cases of perceptual imprecision are used to outline the DC view, which also applies to imprecise photographic content. Next, inadequacies in the more standard single content (SC) view are demonstrated. The results are then generalized so as to apply to the content of any kinds of non-conventional representation. The paper continues with evidence that a DC account provides a moderate rather than extreme realist account of perception, and it concludes with an initial analysis of the failure of nomic covariance accounts of information in indeterminacy cases.

16.
L. V. Jones and J. W. Tukey (2000) pointed out that the usual 2-sided, equal-tails null hypothesis test at level alpha can be reinterpreted as simultaneous tests of 2 directional inequality hypotheses, each at level alpha/2, and that the maximum probability of a Type I error is alpha/2 if the truth of the null hypothesis is considered impossible. This article points out that in multiple testing with familywise error rate controlled at alpha, the directional error rate (assuming all null hypotheses are false) is greater than alpha/2 and can be arbitrarily close to alpha. Single-step, step-down, and step-up procedures are analyzed, and other error rates, including the false discovery rate, are discussed. Implications for confidence interval estimation and hypothesis testing practices are considered.
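The Jones-Tukey reinterpretation for a single test can be sketched directly: run two directional tests, each at level alpha/2, whose critical value coincides with the familiar two-sided cutoff. The z-statistic framing below is an illustrative assumption (the article's analysis concerns general multiple-testing procedures, not this one-sample case).

```python
from statistics import NormalDist

def directional_decision(z, alpha=0.05):
    """Jones & Tukey (2000) reading of the two-sided z-test: two
    simultaneous directional tests, each at level alpha/2. Since the
    point null is treated as impossible, the only Type I-style error
    is a directional (sign) error, whose maximum probability for a
    single test is alpha/2."""
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = .05
    if z >= crit:
        return "greater"
    if z <= -crit:
        return "less"
    return "no decision"
```

The article's point is that this alpha/2 bound does not carry over to familywise-controlled multiple testing, where the directional error rate can approach alpha.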

17.
The illusion of truth is traditionally described as the increase in perceived validity of statements when they are repeated (Hasher, Goldstein, & Toppino, 1977). However, subsequent work has demonstrated that the effect can arise due to the increased familiarity or fluency afforded by repetition and not necessarily to repetition per se. We examine the case of information retrieved from memory. Recently experienced information is expected to be subsequently reexperienced as more fluent and familiar than novel information (Jacoby, 1983; Jacoby & Dallas, 1981). Therefore, the possibility exists that information retrieved from memory, because it is subjectively re-experienced at retrieval, would be more fluent or familiar than when it was first learned and would thus lead to an increase in perceived validity. Using a method to indirectly poll the perceived truth of factual statements, our experiment demonstrated that information retrieved from memory does indeed give rise to an illusion of truth. The effect was larger than when statements were explicitly repeated twice and was of comparable size to when statements were repeated 4 times. We conclude that memory retrieval is a powerful method for increasing the perceived validity of statements (and subsequent illusion of truth) and that the illusion of truth is a robust effect that can be observed even without directly polling the factual statements in question.

18.
The paper presents DLV+, a Disjunctive Logic Programming (DLP) system with object-oriented constructs, including classes, objects, (multiple) inheritance, and types. DLV+ is built on top of DLV (a state-of-the-art DLP system) and provides a graphical user interface that allows one to specify, update, browse, query, and reason on knowledge bases. Two strong points of the system are the powerful type-checking mechanism and the advanced interface for visual querying. DLV+ is already used for the development of knowledge-based applications for information extraction and text classification.

19.
It has been proposed that recognition decisions are based on contextual retrieval of specific trace information, in addition to an assessment of item strength. The retrieval component is maximal after a single presentation, whereas the strength component increases with multiple repetition. We report that unilateral anterior temporal lobectomy (ATL) in the language dominant (left) hemisphere impairs initial recognition accuracy without affecting the rate at which repetition improves performance. The implication that the temporal lobe contributes to retrieval rather than strength during recognition is supported by simultaneous event-related potential (ERP) recordings. In normal subjects, the large ERP difference between repeated and nonrepeated words does not increase with increasing study and is associated with contextual integration in other tasks. Thus, the lack of a repetition-induced ERP difference after left-ATL reported here provides converging evidence for a critical role of the temporal lobe in contextual retrieval during recognition.

20.
In text information retrieval, users hope to obtain as much relevant information as possible from the top N retrieved articles. This paper presents a Chinese information retrieval system based on document re-ranking. To improve the precision of the retrieval results by re-ranking the initially retrieved documents, the system re-ranks all 1,000 documents in the result list according to the distribution of keywords in the top 100 documents of the initial ranking. Experiments using the official NTCIR-3 Chinese test data as the test collection show that the system achieves a measurable improvement in the precision of Chinese text retrieval.
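The re-ranking idea in this last abstract is a form of pseudo-relevance feedback: treat the top of the initial ranking as if it were relevant, derive keyword weights from it, and rescore the whole list. The sketch below uses raw term frequencies and a length-normalized overlap score; both choices are illustrative assumptions, not the paper's weighting scheme.

```python
from collections import Counter

def rerank(ranked_docs, top_k=100):
    """Re-rank an initial retrieval list using the keyword
    distribution of its own top-k documents. Each document is a
    list of tokens; `ranked_docs` is ordered by the initial score."""
    # Keyword weights: term frequencies over the assumed-relevant top-k.
    weights = Counter()
    for doc in ranked_docs[:top_k]:
        weights.update(doc)

    def score(doc):
        # Average weight of the document's terms under the
        # feedback-derived distribution.
        return sum(weights[t] for t in doc) / max(len(doc), 1)

    # Python's sort is stable, so ties preserve the initial ranking.
    return sorted(ranked_docs, key=score, reverse=True)
```

In the paper's setting, `ranked_docs` would hold the 1,000 initially retrieved documents and `top_k` would be 100.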


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号