191.
    
Sentiment analysis is an important research field in text mining, with applications in recommendation systems and e-learning environments. This research proposes a new methodology for a hybrid e-learning Recommendation System Based on Sentiment Analysis (RSBSA) that leverages tailored Natural Language Processing (NLP) and Convolutional Neural Network (CNN) techniques to recommend appropriate e-learning materials based on a learner's preferences. Fine-grained sentiment analysis models are integrated to classify text reviews of e-content posted on an e-learning platform. Two enhanced language models based on Continuous Bag of Words (CBOW) and Skip-Gram are introduced, and three resilient language models based on hybrid language techniques are developed to produce a superior vocabulary representation. These models were trained with various CNN architectures to predict resource ratings from online reviews provided by learners. To accomplish this, a customizable dataset, ABHR-1, is used, derived from e-content reviews with corresponding ratings labeled 1–5. The proposed models are evaluated and compared on ABHR-1 and two public datasets. According to the simulation results, the Multiplication-Several-Channels-CNN model outperformed the other models, reaching an accuracy of 90.37% for fine-grained sentiment classification over 5 discrete classes.
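The CBOW and Skip-Gram language models the abstract builds on differ mainly in how training pairs are formed from a window of text. A minimal, dependency-free sketch of that difference (the toy sentence and window size are illustrative, not the paper's actual preprocessing):

```python
def cbow_pairs(tokens, window=2):
    """(context words -> target word) pairs, as in CBOW."""
    pairs = []
    for i, target in enumerate(tokens):
        ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        if ctx:
            pairs.append((tuple(ctx), target))
    return pairs

def skipgram_pairs(tokens, window=2):
    """(target word -> context word) pairs, as in Skip-Gram."""
    pairs = []
    for i, target in enumerate(tokens):
        for ctx in tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]:
            pairs.append((target, ctx))
    return pairs

tokens = "this course is very helpful".split()
print(cbow_pairs(tokens)[0])        # (('course', 'is'), 'this')
print(skipgram_pairs(tokens)[:2])   # [('this', 'course'), ('this', 'is')]
```

CBOW predicts one target from its whole context, so it yields one pair per position; Skip-Gram predicts each context word from the target, so it yields one pair per (target, context-word) combination.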
192.
    
Distributional models of semantics learn word meanings from contextual co-occurrence patterns across a large sample of natural language. Early models, such as LSA and HAL (Landauer & Dumais, 1997; Lund & Burgess, 1996), counted co-occurrence events; later models, such as BEAGLE (Jones & Mewhort, 2007), replaced counting co-occurrences with vector accumulation. All of these models learned from positive information only: Words that occur together within a context become related to each other. A recent class of distributional models, referred to as neural embedding models, are based on a prediction process embedded in the functioning of a neural network: Such models predict words that should surround a target word in a given context (e.g., word2vec; Mikolov, Sutskever, Chen, Corrado, & Dean, 2013). An error signal derived from the prediction is used to update each word's representation via backpropagation. However, another key difference in predictive models is their use of negative information in addition to positive information to develop a semantic representation. The models use negative examples to predict words that should not surround a word in a given context. As before, an error signal derived from the prediction prompts an update of the word's representation, a procedure referred to as negative sampling. Standard uses of word2vec recommend a greater or equal ratio of negative to positive sampling. The use of negative information in developing a representation of semantic information is often thought to be intimately associated with word2vec's prediction process. We assess the role of negative information in developing a semantic representation and show that its power does not reflect the use of a prediction mechanism. Finally, we show how negative information can be efficiently integrated into classic count-based semantic models using parameter-free analytical transformations.
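One well-known analytical bridge between negative sampling and count-based models is shifting pointwise mutual information by log k, where k is the number of negative samples. A minimal sketch of a positive-PMI transformation with such a shift, over a toy co-occurrence table (the counts, and the assumption that this shifted-PPMI form matches the paper's transformation, are illustrative):

```python
import math
from collections import Counter

def ppmi(cooc, shift_k=1):
    """Positive PMI from co-occurrence counts; shift_k > 1 subtracts
    log(k), mirroring k negative samples in skip-gram training."""
    total = sum(cooc.values())
    row, col = Counter(), Counter()
    for (w, c), n in cooc.items():
        row[w] += n
        col[c] += n
    out = {}
    for (w, c), n in cooc.items():
        pmi = math.log(n * total / (row[w] * col[c])) - math.log(shift_k)
        out[(w, c)] = max(pmi, 0.0)   # clip negatives to zero
    return out

# toy counts: "the" co-occurs with everything, "purr" only with "cat"
cooc = {("cat", "purr"): 4, ("cat", "the"): 4,
        ("dog", "the"): 4, ("dog", "bark"): 4}
scores = ppmi(cooc)
```

The uninformative pair ("cat", "the") scores zero while the selective pair ("cat", "purr") scores positively, which is the sense in which negative information can be folded into a count model without any prediction mechanism.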
193.
    
Sentiment analysis on social media such as Twitter has become a very important and challenging task. Due to the characteristics of such data—tweet length, spelling errors, abbreviations, and special characters—the sentiment analysis task in such an environment requires a non-traditional approach. Moreover, social media sentiment analysis is a fundamental problem with many interesting applications. Most current social media sentiment classification methods judge the sentiment polarity primarily according to textual content and neglect other information available on these platforms. In this paper, we propose a neural network model that also incorporates user behavioral information within a given document (tweet). The neural network used is a Convolutional Neural Network (CNN). The system is evaluated on two datasets provided by the SemEval-2016 Workshop. The proposed model outperforms current baseline models (including Naive Bayes and Support Vector Machines), which shows that going beyond the textual content of a tweet is beneficial in sentiment classification because it gives the classifier a fuller picture of the task.
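The core idea, augmenting textual features with user behavioral information before classification, can be sketched as a simple feature concatenation. The bag-of-words stand-in and the behavioral features below are illustrative, not the paper's actual CNN inputs:

```python
def text_vector(tokens, vocab):
    """Bag-of-words counts over a fixed vocabulary (a stand-in
    for the learned CNN text representation)."""
    return [tokens.count(w) for w in vocab]

def combined_features(tokens, vocab, user_features):
    """Concatenate text features with per-user behavioral features
    (e.g., a hypothetical past positive-tweet ratio)."""
    return text_vector(tokens, vocab) + list(user_features)

vocab = ["good", "bad", "love"]
feats = combined_features("love this good good phone".split(), vocab,
                          user_features=[0.8, 0.1])
# feats -> [2, 0, 1, 0.8, 0.1]
```

Any downstream classifier then sees both what the tweet says and who wrote it, which is what "going beyond the content of a document" amounts to here.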
194.
    
Spoken-language-based natural Human-Robot Interaction (HRI) requires robots to understand spoken language and to extract intention-related information from the working scenario. Object affordance recognition is a feasible way to ground the intention-related object in the working environment. To this end, we propose a dataset and a deep CNN-based architecture to learn human-centered object affordances. We further present an affordance-based multimodal fusion framework that realizes intended object grasping according to the spoken instructions of human users. The proposed framework contains an intention semantics extraction module that extracts the intention from spoken language, a deep Convolutional Neural Network (CNN) based object affordance recognition module that recognizes human-centered object affordances, and a multimodal fusion module that bridges the extracted intentions and the recognized object affordances. We also conduct multiple intended object grasping experiments on a PR2 platform to validate the feasibility and practicability of the presented HRI framework.
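The fusion step, bridging an intention extracted from speech with recognized object affordances, can be illustrated as a lookup over an affordance table. The object names and affordance labels below are hypothetical, not the paper's actual vocabulary:

```python
def ground_intention(action, affordances):
    """Return objects whose recognized affordances support the
    intended action extracted from the spoken instruction."""
    return [obj for obj, affs in affordances.items() if action in affs]

# hypothetical recognition output: object -> set of affordances
affordances = {
    "mug":    {"grasp", "drink", "pour"},
    "knife":  {"grasp", "cut"},
    "screen": {"watch"},
}
print(ground_intention("drink", affordances))   # ['mug']
```

In the real system both sides of this lookup are learned (intention from an NLP module, affordances from a CNN), but the bridge itself is this kind of matching between the two label spaces.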
195.
On 27 February 2010, Chile experienced one of the strongest earthquakes in recorded history. The study aimed to evaluate post-traumatic stress symptoms (PTSS) and post-traumatic growth (PTG) in children and adolescents 12 months (T1) and 24 months (T2) after the earthquake and tsunamis in Chile in 2010. Three hundred twenty-five children and adolescents (47.4% girls; 52.6% boys) between the ages of 10 and 16 years participated in the study. The instruments included the Revised Post-traumatic Growth Inventory for Children by Kilmer et al., the Childhood PTSD Scale by Foa et al. and the Rumination Scale for Children by Cryder et al., as well as a scale to assess the severity of the event and a sociodemographic questionnaire. The PTSS and PTG scores decreased at T2. In addition, the main predictors of PTSS and PTG were disruptive experiences, losses after the event and intrusive and deliberate rumination during the previous year. These results enhance understanding of factors related to PTG, improve the ability to predict PTSS and PTG in children and adolescents following natural disasters, and inform the design of intervention strategies to promote better mental health in those affected.
196.
    
The language we use over the course of conversation changes as we establish common ground and learn what our partner finds meaningful. Here we draw upon recent advances in natural language processing to provide a finer-grained characterization of the dynamics of this learning process. We release an open corpus (>15,000 utterances) of extended dyadic interactions in a classic repeated reference game task where pairs of participants had to coordinate on how to refer to initially difficult-to-describe tangram stimuli. We find that different pairs discover a wide variety of idiosyncratic but efficient and stable solutions to the problem of reference. Furthermore, these conventions are shaped by the communicative context: words that are more discriminative in the initial context (i.e., that are used for one target more than others) are more likely to persist through the final repetition. Finally, we find systematic structure in how a speaker's referring expressions become more efficient over time: Syntactic units drop out in clusters following positive feedback from the listener, eventually leaving short labels containing open-class parts of speech. These findings provide a higher resolution look at the quantitative dynamics of ad hoc convention formation and support further development of computational models of learning in communication.
197.
    
Forensic evidence often involves an evaluation of whether two impressions were made by the same source, such as whether a fingerprint from a crime scene has detail in agreement with an impression taken from a suspect. Human experts currently outperform computer-based comparison systems, but the strength of the evidence exemplified by the observed detail in agreement must be evaluated against the possibility that some other individual may have created the crime scene impression. Therefore, the strongest evidence comes from features in agreement that are also not shared with other impressions from other individuals. We characterize the nature of human expertise by applying two extant metrics to the images used in a fingerprint recognition task and use eye gaze data from experts to both tune and validate the models. The Attention via Information Maximization (AIM) model (Bruce & Tsotsos, 2009) quantifies the rarity of regions in the fingerprints to determine diagnosticity for purposes of excluding alternative sources. The CoVar model (Karklin & Lewicki, 2009) captures relationships between low-level features, mimicking properties of the early visual system. Both models produced classification and generalization performance in the 75%–80% range when classifying where experts tend to look. A validation study using regions identified by the AIM model as diagnostic demonstrates that human experts perform better when given regions of high diagnosticity. The computational nature of the metrics may help guard against wrongful convictions, as well as provide a quantitative measure of the strength of evidence in casework.
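Rarity of the kind AIM quantifies is commonly formalized as self-information, -log p(feature): the less frequent a region's features across impressions, the more diagnostic it is for excluding alternative sources. A toy sketch under that assumption (the region labels are illustrative stand-ins, not actual ridge features):

```python
import math
from collections import Counter

def self_information(features):
    """Map each region's feature label to -log2 p(label): rarer
    regions carry more bits, a rough analogue of an AIM-style
    rarity score."""
    counts = Counter(features)
    total = len(features)
    return {f: -math.log2(n / total) for f, n in counts.items()}

# hypothetical feature labels for 16 regions of one print
regions = ["loop"] * 8 + ["whorl"] * 7 + ["rare_delta"] * 1
info = self_information(regions)
```

Under this measure the one-off "rare_delta" region scores 4 bits against 1 bit for the common "loop" regions, matching the intuition that the strongest evidence comes from detail not shared with other individuals.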
198.
This study first examined whether contextual expectation can influence rapid scene recognition, and then compared scene information at different spatial frequencies to probe the role of contextual expectation at different stages of rapid scene recognition. Three experiments were conducted. Experiment 1 used a binocular rivalry paradigm to examine whether the subjective selection of briefly presented scene stimuli is affected by contextual expectation. Experiment 2 used a dual-task paradigm combining lexical classification with rapid scene recognition to compare recognition performance under expected and unexpected conditions. Experiment 3 used low-spatial-frequency information (Experiment 3a) and high-spatial-frequency information (Experiment 3b) as stimuli to explore the processing stage at which the expectation effect operates. The results showed that contextual expectation influences both subjective selection and response performance in rapid scene recognition. For scene information at either spatial frequency, discriminability improved when the scene stimulus was consistent with the expectation, but the effect of expectation on response bias occurred only during the processing of high-spatial-frequency scene information. Thus, contextual expectation plays different roles at different processing stages of rapid scene recognition, and rapid scene recognition integrates the processing of information at different spatial frequencies.
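Discriminability and response bias in such paradigms are conventionally measured with signal detection theory, where sensitivity is d' = z(hit rate) - z(false-alarm rate). A minimal sketch with hypothetical hit and false-alarm rates (not the study's actual data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity d' = z(H) - z(FA), the standard signal-detection
    measure of discriminability."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# hypothetical rates for expected vs. unexpected scene conditions
expected = d_prime(0.85, 0.20)
unexpected = d_prime(0.70, 0.30)
assert expected > unexpected   # expectation-consistent scenes discriminated better
```

Response bias would be captured by the companion criterion measure c = -(z(H) + z(FA)) / 2, which is where an expectation effect confined to high-spatial-frequency processing would show up.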
199.
This study examined the performance of participants with different field-cognitive styles in a multiple object tracking (MOT) task by varying the number of targets and the rotation angle of an abrupt change in the motion reference frame. The results showed: (1) under low task difficulty (stable motion reference frame, 3 or 4 targets) and medium task difficulty (reference frame abruptly rotated 20° to the right, 4 targets), field-independent participants tracked significantly better than field-dependent participants; under high task difficulty (stable frame with 5 targets, or frame abruptly rotated 40° to the right with 4 targets), the two groups did not differ significantly, indicating that the tracking performance of participants with different field-cognitive styles depends on task difficulty; (2) as the number of targets increased from 3 to 5, the increased tracking load significantly reduced tracking performance; (3) compared with a stable motion frame, abrupt rightward rotations of 20° and 40° both significantly impaired tracking: the change in rotation angle disrupted scene continuity and thereby degraded performance.
200.
Deductive inference is usually regarded as being “tautological” or “analytical”: the information conveyed by the conclusion is contained in the information conveyed by the premises. This idea, however, clashes with the undecidability of first-order logic and with the (likely) intractability of Boolean logic. In this article, we address the problem both from the semantic and the proof-theoretical point of view. We propose a hierarchy of propositional logics that are all tractable (i.e. decidable in polynomial time), although by means of growing computational resources, and converge towards classical propositional logic. The underlying claim is that this hierarchy can be used to represent increasing levels of “depth” or “informativeness” of Boolean reasoning. Special attention is paid to the most basic logic in this hierarchy, the pure “intelim logic”, which satisfies all the requirements of a natural deduction system (allowing both introduction and elimination rules for each logical operator) while admitting of a feasible (quadratic) decision procedure. We argue that this logic is “analytic” in a particularly strict sense, in that it rules out any use of “virtual information”, which is chiefly responsible for the combinatorial explosion of standard classical systems. As a result, analyticity and tractability are reconciled and growing degrees of computational complexity are associated with the depth at which the use of virtual information is allowed.
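The flavor of reasoning without "virtual information" can be sketched as a forward closure under direct rules only, with no case-splitting on temporary assumptions. This toy fragment (conjunction elimination and modus ponens) is illustrative of the idea; the paper's intelim logic also includes introduction rules and a full quadratic decision procedure:

```python
def intelim_closure(premises):
    """Forward closure under two elimination rules: and-elimination
    and modus ponens. Atoms are strings; ('and', a, b) and
    ('imp', a, b) encode conjunctions and conditionals. Every derived
    formula is a subformula of a premise, so the loop terminates."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            new = set()
            if isinstance(f, tuple) and f[0] == "and":
                new |= {f[1], f[2]}                  # and-elimination
            if isinstance(f, tuple) and f[0] == "imp" and f[1] in derived:
                new.add(f[2])                        # modus ponens
            if not new <= derived:
                derived |= new
                changed = True
    return derived

premises = {("and", "p", "q"), ("imp", "p", "r")}
assert {"p", "q", "r"} <= intelim_closure(premises)
```

Because no rule introduces a hypothesis that is later discharged, the search space stays polynomial, which is the reconciliation of analyticity and tractability the abstract describes.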