21.
A jackknife-like procedure is developed for producing standard errors of estimate in maximum likelihood factor analysis. Unlike earlier methods based on information theory, the procedure developed is computationally feasible on larger problems. Unlike earlier methods based on the jackknife, the present procedure is not plagued by the factor alignment problem, the Heywood case problem, or the necessity to jackknife by groups. Standard errors may be produced for rotated and unrotated loading estimates using either orthogonal or oblique rotation, as well as for estimates of unique factor variances and common factor correlations. The total cost for larger problems is a small multiple of the square of the number of variables times the number of observations used in the analysis. Examples are given to demonstrate the feasibility of the method. The research done by R. I. Jennrich was supported in part by NSF Grant MCS 77-02121. The research done by D. B. Clarkson was supported in part by NSERC Grant A3109.
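The delete-one jackknife underlying such a procedure is easy to sketch generically. The following is a minimal illustration of jackknife standard errors for an arbitrary scalar estimator; it is not the authors' factor-analytic implementation, and the sample data and choice of the mean as estimator are invented for illustration.

```python
import math

def jackknife_se(data, estimator):
    """Leave-one-out jackknife standard error of a scalar estimator."""
    n = len(data)
    # Recompute the estimate with each observation deleted in turn.
    loo = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    mean_loo = sum(loo) / n
    # Jackknife variance: (n - 1) / n times the sum of squared deviations.
    var = (n - 1) / n * sum((t - mean_loo) ** 2 for t in loo)
    return math.sqrt(var)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = lambda xs: sum(xs) / len(xs)
se = jackknife_se(data, mean)  # ≈ 0.756, identical to the usual s/sqrt(n) for the mean
```

For the sample mean the jackknife reproduces the textbook standard error exactly; its value lies in applying the same recipe to estimators, such as rotated factor loadings, whose sampling variance has no simple closed form.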
22.

The application of medical artificial intelligence algorithms carries risks: restricted autonomy in diagnosis and treatment, a lack of transparency in the diagnostic process, biased inducement by medical algorithms, and missing safety guarantees. Efforts to prevent and control these risks face practical challenges, including deficiencies in traditional regulation, ambiguous definition of subject qualification, an imbalance in the control of algorithmic power, and difficulty in tracing liable parties, which urgently call for legal remedies. Therefore, a comprehensive system for preventing and controlling medical algorithm risks should be constructed: uphold the benevolent orientation of taming medical algorithmic power, strengthen the effectiveness of subjects' rights in countering medical algorithmic power, and clarify the principles for attributing liability for medical algorithms, thereby reinforcing the rule of law in the prevention and control of medical AI algorithm risks.

23.
A substantial amount of recent work in natural language generation has focused on the generation of 'one-shot' referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We test this hypothesis by eliciting referring expressions from human subjects and computing the similarity between the expressions elicited and the ones generated by algorithms. It turns out that the success of the IA depends substantially on the 'preference order' (PO) employed by the IA, particularly in complex domains. While some POs cause the IA to produce referring expressions that are very similar to expressions produced by human subjects, others cause the IA to perform worse than its main competitors; moreover, it turns out to be difficult to predict the success of a PO on the basis of existing psycholinguistic findings or frequencies in corpora. We also examine the computational complexity of the algorithms in question and argue that there are no compelling reasons for preferring the IA over some of its main competitors on these grounds. We conclude that future research on the generation of referring expressions should explore alternatives to the IA, focusing on algorithms, inspired by the Greedy Algorithm, which do not work with a fixed PO.
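The Incremental Algorithm itself is simple to sketch. The version below is a simplified reconstruction of Dale and Reiter's IA: it walks the preference order once and includes an attribute only if it rules out at least one remaining distractor. It omits the special handling of the type attribute, and the domain objects and preference order are invented for illustration.

```python
def incremental_algorithm(target, distractors, preference_order):
    """Simplified Incremental Algorithm for referring expression generation.

    target / distractors: dicts mapping attribute name -> value.
    preference_order: attribute names, most preferred first.
    Returns attribute-value pairs identifying the target, or None on failure.
    """
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target.get(attr)
        if value is None:
            continue
        # Include the attribute only if it excludes some distractor
        # still in the contrast set (no backtracking: once in, it stays).
        if any(d.get(attr) != value for d in remaining):
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            break
    return description if not remaining else None

target = {"type": "chair", "colour": "red", "size": "large"}
distractors = [
    {"type": "chair", "colour": "blue", "size": "large"},
    {"type": "table", "colour": "red", "size": "small"},
]
desc = incremental_algorithm(target, distractors, ["colour", "type", "size"])
# → {"colour": "red", "type": "chair"}
```

The study's central point is visible even here: with the order `["size", "type", "colour"]` the same scene yields a different expression, so the PO, not the algorithm skeleton, drives similarity to human output.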
24.
Paul H. Carr, Zygon, 2004, 39(4): 933-940
Abstract Albert Einstein and Huston Smith reflect the old metaphor that chaos and randomness are bad. Scientists recently have discovered that many phenomena, from the fluctuations of the stock market to variations in our weather, have the same underlying order. Natural beauty from plants to snowflakes is described by fractal geometry; tree branching from trunks to twigs has the same fractal scaling as our lungs, from trachea to bronchi. Algorithms for drawing fractals have both randomness and global determinism. Fractal statistics is like picking a card from a stacked deck rather than from one that is shuffled to be truly random. The polarity of randomness (or freedom) and law characterizes the self‐creating natural world. Polarity is in consonance with Taoism and contemporary theologians such as Paul Tillich, Alfred North Whitehead, Gordon Kaufman, Philip Hefner, and Pierre Teilhard de Chardin. Joseph Ford's new metaphor is replacing the old: “God plays dice with the universe, but they're loaded dice.”
25.
Computer algorithms are increasingly being used to predict people's preferences and make recommendations. Although people frequently encounter these algorithms because they are cheap to scale, we do not know how they compare to human judgment. Here, we compare computer recommender systems to human recommenders in a domain that affords humans many advantages: predicting which jokes people will find funny. We find that recommender systems outperform humans, whether strangers, friends, or family. Yet people are averse to relying on these recommender systems. This aversion partly stems from the fact that people believe the human recommendation process is easier to understand. It is not enough for recommender systems to be accurate; they must also be understood.
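The abstract does not specify the recommender systems' internals, but a typical baseline of this kind is user-based collaborative filtering: predict a person's rating of a joke from similar users' ratings. The following sketch, with its cosine similarity measure and toy joke ratings, is an assumption for illustration only.

```python
import math

def cosine(u, v):
    """Cosine similarity between two users over their co-rated items."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    num = sum(u[i] * v[i] for i in shared)
    den = math.sqrt(sum(u[i] ** 2 for i in shared)) * \
          math.sqrt(sum(v[i] ** 2 for i in shared))
    return num / den if den else 0.0

def predict(ratings, user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        w = cosine(ratings[user], r)
        num += w * r[item]
        den += abs(w)
    return num / den if den else None

# Hypothetical joke ratings on a 1-5 scale.
jokes = {
    "ann": {"j1": 5, "j2": 1},
    "bob": {"j1": 5, "j2": 1, "j3": 4},
    "cat": {"j1": 1, "j2": 5, "j3": 2},
}
pred = predict(jokes, "ann", "j3")  # ≈ 3.44, pulled toward bob's rating
```

Because ann's tastes match bob's almost exactly, his rating of `j3` dominates the prediction; the opacity of exactly this weighting step is what the paper identifies as a source of aversion.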
26.
Conceptual Blending (CB) theory describes the cognitive mechanisms underlying the way humans process the emergence of new conceptual spaces by blending two input spaces. CB theory has been primarily used as a method for interpreting creative artefacts, while recently it has been utilised in the context of computational creativity for the algorithmic invention of new concepts. Examples in the domain of music include the employment of CB interpretatively as a tool to explain musical semantic structures based on the lyrics of songs or on the relations between body gestures and music structures. Recent work on generative applications of CB has shown that proper low-level representation of the input spaces allows the generation of consistent and sometimes surprising blends. However, blending high-level features of music explicitly (as discussed in the interpretative studies) is hardly feasible with mere low-level representation of objects. Additionally, selecting the features that are most salient in the context of two input spaces and relevant background knowledge, and that should thus be preserved and integrated in new interesting blends, has not yet been tackled in a cognitively pertinent manner. The paper at hand proposes a novel approach to generating new material that allows blending high-level features by combining low-level structures, based on statistically computed salience values for each high-level feature extracted from data. The proposed framework is applied to a basic but, at the same time, complicated field of music, namely melodic generation. The examples presented herein allow an insightful examination of what the proposed approach does, revealing new possibilities and prospects.
27.
With rapid advancement in cellphones and intelligent in-vehicle technologies, along with drivers' inclination to multitask, crashes due to distracted driving have become a growing safety concern in our road network. Some previous studies attempted to detect distracted driving behaviors in real time to mitigate their adverse consequences. However, these studies mainly focused on detecting either visual or cognitive distractions only, while most real-life distracting tasks involve the driver's visual, cognitive, and physical workload simultaneously. Additionally, previous studies frequently used eye, head, or face tracking data, although current vehicles are not commonly equipped with technologies to acquire such data; those data are also comparatively difficult to acquire in real time during traffic monitoring operations. To address these issues, this study focused on developing algorithms for detecting distraction tasks that involve simultaneous visual, cognitive, and physical workload using only vehicle dynamics data. Specifically, algorithms were developed to detect driving behaviors under two distraction tasks: texting and eating. An experiment was designed to include the two distracted driving scenarios and a control, with multiple runs for each. A medium-fidelity driving simulator was used to acquire vehicle dynamics data for each scenario and each run. Several data mining techniques were explored to investigate their performance in detecting distraction. Among them, the performance of two linear models (linear discriminant analysis and logistic regression) and two nonlinear models (support vector machines and random forests) is reported in this article. Random forests algorithms had the best performance, detecting texting and eating distraction with an accuracy of 85.38% and 81.26%, respectively.
This study may provide useful guidance for the successful development and implementation of distracted-driver detection algorithms in a connected vehicle environment, as well as to auto manufacturers interested in integrating distraction detection systems in their vehicles.
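Classifiers of this kind consume summary features computed from vehicle dynamics. As an illustration, the features below (lane-position variability and steering reversal counts) are driving-performance measures commonly associated with distraction, but they are an assumption here, not the paper's actual feature set, and the sample traces are invented.

```python
import statistics

def driving_features(lane_pos, steering):
    """Summary features over one run of vehicle-dynamics samples.

    lane_pos: lateral lane offsets (m); steering: steering angles (deg).
    """
    # Lateral control typically degrades under distraction, raising the
    # standard deviation of lane position (SDLP).
    sdlp = statistics.pstdev(lane_pos)
    # Steering reversals: sign changes between successive steering deltas,
    # reflecting jerky corrective inputs.
    deltas = [b - a for a, b in zip(steering, steering[1:])]
    reversals = sum(1 for a, b in zip(deltas, deltas[1:]) if a * b < 0)
    return {"sdlp": sdlp, "reversals": reversals}

baseline = driving_features([0.1, 0.12, 0.09, 0.11], [0.0, 0.5, 1.0, 1.2])
texting = driving_features([0.1, 0.4, -0.2, 0.5], [0.0, 2.0, -1.5, 2.5])
```

A random forest (or any of the four models compared in the paper) would then be trained on vectors of such features, one per simulator run, labeled by scenario.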
28.
While standard procedures of causal reasoning, such as procedures analyzing causal Bayesian networks, are custom-built for (non-deterministic) probabilistic structures, this paper introduces a Boolean procedure that uncovers deterministic causal structures. Contrary to existing Boolean methodologies, the procedure advanced here successfully analyzes structures of arbitrary complexity. It roughly involves three parts: first, deterministic dependencies are identified in the data; second, these dependencies are suitably minimalized in order to eliminate redundancies; and third, one or, in case of ambiguities, more than one causal structure is assigned to the minimalized deterministic dependencies.
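The second step, redundancy elimination, can be sketched concretely. The code below is a minimal reconstruction under the assumption that conditions are conjunctions of factor-value literals tested for sufficiency against observed cases; it is not the paper's full procedure, and the example data are invented.

```python
def is_sufficient(cond, cases, effect):
    """Is the conjunction `cond` sufficient for `effect` in the data?

    cond: set of (factor, value) literals; each case is a dict of
    factor -> bool including the effect factor.
    """
    matching = [c for c in cases if all(c[f] == v for f, v in cond)]
    return bool(matching) and all(c[effect] for c in matching)

def minimalize(cond, cases, effect):
    """Drop each conjunct whose removal keeps the condition sufficient."""
    cond = set(cond)
    for lit in list(cond):
        trial = cond - {lit}
        if trial and is_sufficient(trial, cases, effect):
            cond = trial
    return cond

# Invented data: A alone determines E; B is causally irrelevant.
cases = [
    {"A": True,  "B": True,  "E": True},
    {"A": True,  "B": False, "E": True},
    {"A": False, "B": True,  "E": False},
    {"A": False, "B": False, "E": False},
]
minimal = minimalize({("A", True), ("B", True)}, cases, "E")
# → {("A", True)}: the redundant conjunct B is eliminated
```

The minimalized conditions are what the third step assembles into one or more candidate causal structures.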
29.
There has been considerable debate in the literature about the relative merits of information processing versus dynamical approaches to understanding cognitive processes. In this article, we explore the relationship between these two styles of explanation using a model agent evolved to solve a relational categorization task. Specifically, we separately analyze the operation of this agent using the mathematical tools of information theory and dynamical systems theory. Information‐theoretic analysis reveals how task‐relevant information flows through the system to be combined into a categorization decision. Dynamical analysis reveals the key geometrical and temporal interrelationships underlying the categorization decision. Finally, we propose a framework for directly relating these two different styles of explanation and discuss the possible implications of our analysis for some of the ongoing debates in cognitive science.
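The information-theoretic side of such an analysis rests on mutual information, which can be estimated directly from paired samples of discrete variables. A minimal sketch (the variables are invented for illustration; a real analysis would discretize continuous neural states first):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits, estimated from paired samples of discrete variables."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    # Sum p(x,y) * log2( p(x,y) / (p(x) * p(y)) ) over observed pairs.
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

stim = [0, 1, 0, 1, 0, 1, 0, 1]
copy = stim                      # a state that tracks the stimulus
noise = [0, 0, 1, 1, 0, 0, 1, 1]  # a state independent of it
mi_copy = mutual_information(stim, copy)    # 1.0 bit
mi_noise = mutual_information(stim, noise)  # 0.0 bits
```

Computing such quantities between a stimulus variable and each internal state, over time, is what lets one trace where task-relevant information flows before it is combined into the decision.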
30.
This paper presents an optimized cuttlefish algorithm for feature selection, based on the traditional cuttlefish algorithm, which can be used for diagnosis of Parkinson's disease at an early stage. Parkinson's disease is a central nervous system disorder caused by the loss of brain cells. It is incurable and can eventually lead to death, but medication can help control symptoms and extend the patient's life to some extent. The proposed model uses the traditional cuttlefish algorithm as a search strategy to ascertain the optimal subset of features, with decision tree and k-nearest neighbor classifiers judging the quality of the selected features. The Parkinson speech dataset, with multiple types of sound recordings, and the Parkinson handwriting sample dataset are used to evaluate the proposed model. The proposed algorithm predicts Parkinson's disease with an accuracy of approximately 94% and can help individuals obtain proper treatment at an early stage. The experimental results reveal that the proposed bio-inspired algorithm finds an optimal subset of features, maximizing accuracy while minimizing the number of features selected, and is more stable.
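The underlying wrapper scheme, a search over feature subsets scored by a classifier, can be sketched with an exhaustive search standing in for the cuttlefish search and a 1-nearest-neighbor classifier standing in for the paper's k-NN and decision tree judges; the toy data are hypothetical.

```python
import math
from itertools import combinations

def loo_1nn_accuracy(X, y, feats):
    """Leave-one-out accuracy of a 1-nearest-neighbor classifier
    restricted to the feature indices in `feats`."""
    correct = 0
    for i in range(len(X)):
        best, pred = math.inf, None
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if d < best:
                best, pred = d, y[j]
        correct += pred == y[i]
    return correct / len(X)

def select_features(X, y, k):
    """Wrapper feature selection: pick the k-feature subset with the best
    classifier score. A metaheuristic like the cuttlefish algorithm would
    replace this exhaustive search when the feature space is large."""
    n_feats = len(X[0])
    return max(
        combinations(range(n_feats), k),
        key=lambda feats: loo_1nn_accuracy(X, y, feats),
    )

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 5.0], [0.1, 1.0], [1.0, 4.9], [1.1, 1.1]]
y = [0, 0, 1, 1]
best = select_features(X, y, 1)  # → (0,), the informative feature
```

The bio-inspired part of the paper lies entirely in replacing the exhaustive `max` with the cuttlefish algorithm's guided search; the scoring side of the wrapper is unchanged.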