61.
The level of moral development and moral intensity, as described in cognitive psychology, affects not only the ethical behavior of accountants but also has a direct impact on the quality and level of accounting work. This paper therefore analyzes the ethical behavior of accountants from the perspective of cognitive psychology. Computer-aided data mining techniques are introduced, and government accounting risk assessment management for financial accountants is studied. The paper first describes the principles of cognitive psychology used to measure the ethical level of accountants, analyzes the predicament of moral judgments, and proposes an optimization plan for improving the ethical intentions of accountants. Support vector machine (SVM) classification, a data mining technique, is studied to explore how to conduct effective and reliable evaluation, so as to provide a scientific basis for decision-making in improving accounting management. Simulation experiments show that continuously improving the ethical standards of accountants and strengthening the forecasting of accounting risks can further optimize accounting management.
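A minimal, hypothetical sketch of the SVM classification named in the abstract above. The paper's actual features and data are not given, so the toy "risk indicator" points and every identifier here are assumptions; the trainer uses the standard stochastic sub-gradient (Pegasos-style) update on the hinge loss rather than whatever solver the authors used.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM with stochastic sub-gradient descent on the
    hinge loss (Pegasos-style update). Labels in y must be +1 or -1."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)                     # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1.0 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1.0:                          # inside margin: correct
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy, clearly separable "risk indicator" data (hypothetical).
X = [[1.0, 1.2], [1.1, 0.9], [0.9, 1.1], [3.0, 3.2], [3.1, 2.9], [2.9, 3.1]]
y = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(X, y)
```

After training, `predict(w, b, x)` assigns a new observation to the low- or high-risk class by the sign of the learned decision function.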
62.
This paper addresses a common challenge with computational cognitive models: identifying parameter values that are both theoretically plausible and generate predictions that match well with empirical data. While computational models can offer deep explanations of cognition, they are computationally complex and often out of reach of traditional parameter fitting methods. Weak methodology may lead to premature rejection of valid models or to acceptance of models that might otherwise be falsified. Mathematically robust fitting methods are, therefore, essential to the progress of computational modeling in cognitive science. In this article, we investigate the capability and role of modern fitting methods—including Bayesian optimization and approximate Bayesian computation—and contrast them to some more commonly used methods: grid search and Nelder–Mead optimization. Our investigation consists of a reanalysis of the fitting of two previous computational models: an Adaptive Control of Thought—Rational model of skill acquisition and a computational rationality model of visual search. The results contrast the efficiency and informativeness of the methods. A key advantage of the Bayesian methods is the ability to estimate the uncertainty of fitted parameter values. We conclude that approximate Bayesian computation is (a) efficient, (b) informative, and (c) offers a path to reproducible results.
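As an illustrative sketch of the approximate Bayesian computation the abstract evaluates: the simplest ABC variant is rejection sampling, where prior draws are kept only when simulated data land close to the observation, and the spread of the accepted draws quantifies parameter uncertainty. The binomial toy model below is an assumption for illustration, not either of the paper's cognitive models.

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps,
                  n_draws=20000, seed=1):
    """ABC rejection sampling: draw a parameter from the prior, simulate
    data from the model, and keep the draw only when the simulation lands
    within eps of the observation."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) <= eps:
            accepted.append(theta)
    return accepted

# Toy model: 100 Bernoulli trials with unknown success probability theta.
observed = 30                                           # 30 successes seen
simulate = lambda theta, rng: sum(rng.random() < theta for _ in range(100))
prior_sample = lambda rng: rng.random()                 # Uniform(0, 1) prior
distance = lambda sim, obs: abs(sim - obs)

posterior = abc_rejection(observed, simulate, prior_sample, distance, eps=2)
mean = sum(posterior) / len(posterior)
spread = (sum((t - mean) ** 2 for t in posterior) / len(posterior)) ** 0.5
```

The accepted sample approximates the posterior: its mean sits near the true rate 0.3, and `spread` is the uncertainty estimate that grid search or Nelder–Mead point fits would not provide.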
63.
Statistical tests involving mean directions have in the past been limited to two- and three-dimensional settings, perhaps owing to their primary applications to such fields as geology, meteorology and related earth sciences. In the study of interactive multicriterion optimization it becomes necessary to compare gradient directions obtained from decision makers by two or more methods. Typically these direction vectors are in a higher-dimensional space. This paper provides a general procedure based on Householder transformations which is potentially suitable for any finite dimension. An illustration and comparison of the method are provided.
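The key building block named in the abstract, a Householder transformation, can be sketched in a few lines: the reflection H = I - 2vv^T/(v^T v) with v = u - e1 maps a unit direction vector u in any finite dimension onto the first coordinate axis, which is what lets a directional test be rotated into a standard frame. The specific 4-dimensional direction below is an illustrative assumption, not data from the paper.

```python
import math

def householder_to_axis(u):
    """Return a function applying the Householder reflection H that maps
    the unit vector u onto the first coordinate axis e1 (so H(u) = e1).
    H = I - 2 v v^T / (v^T v) with v = u - e1."""
    v = list(u)
    v[0] -= 1.0                              # v = u - e1
    vv = sum(x * x for x in v)
    if vv < 1e-15:                           # u is already e1
        return lambda x: list(x)
    def reflect(x):
        c = 2.0 * sum(vi * xi for vi, xi in zip(v, x)) / vv
        return [xi - c * vi for xi, vi in zip(x, v)]
    return reflect

# A mean direction in 4-dimensional space, normalized to unit length.
raw = [1.0, 2.0, 2.0, 4.0]
norm = math.sqrt(sum(c * c for c in raw))
u = [c / norm for c in raw]                  # [0.2, 0.4, 0.4, 0.8]

H = householder_to_axis(u)
e1 = H(u)                                    # ~[1.0, 0.0, 0.0, 0.0]
```

Because a Householder reflection is orthogonal and its own inverse, applying `H` again recovers the original direction, and angles between other direction vectors are preserved under the transformation.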
64.
In educational practice, a test assembly problem is formulated as a system of inequalities induced by test specifications. Each solution to the system is a test, represented by a 0–1 vector, where each element corresponds to an item included (1) or not included (0) into the test. Therefore, the size of a 0–1 vector equals the number of items n in a given item pool. All solutions form a feasible set—a subset of the 2^n vertices of the unit cube in an n-dimensional vector space. Test assembly is uniform if each test from the feasible set has an equal probability of being assembled. This paper demonstrates several important applications of uniform test assembly for educational practice. Based on Slepian’s inequality, a binary program was analytically studied as a candidate for uniform test assembly. The results of this study establish a connection between combinatorial optimization and probability inequalities. They identify combinatorial properties of the feasible set that control the uniformity of the binary programming test assembly. Computer experiments illustrating the concepts of this paper are presented.
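The feasible set and uniform assembly described above can be made concrete on a pool small enough to enumerate. The item pool, the specification, and all names below are illustrative assumptions (a real pool is far too large for brute force, which is why the paper studies binary programming instead).

```python
from itertools import combinations
import random

# Hypothetical item pool: (content area, difficulty) per item.
pool = [
    ("algebra", 0.3), ("algebra", 0.5), ("algebra", 0.7),
    ("geometry", 0.4), ("geometry", 0.6), ("geometry", 0.8),
]

def feasible(test):
    """Toy test specification: 3 items, both content areas represented,
    mean difficulty between 0.4 and 0.6 (inclusive)."""
    areas = {pool[i][0] for i in test}
    mean_diff = sum(pool[i][1] for i in test) / len(test)
    return areas == {"algebra", "geometry"} and 0.4 <= mean_diff <= 0.6

# The feasible set: every selection (a 0-1 vector, encoded here by the
# indices of the chosen items) that satisfies the specification.
feasible_set = [t for t in combinations(range(len(pool)), 3) if feasible(t)]

# Uniform test assembly: sample so each feasible test is equally likely.
rng = random.Random(7)
test = rng.choice(feasible_set)
```

Sampling `rng.choice` over the explicit feasible set is uniform by construction; the paper's contribution is characterizing when a binary program achieves the same property without enumerating the set.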
65.
Multidrug-resistant tuberculosis is a difficult problem in tuberculosis treatment. Because the causes and types of drug resistance are diverse, treatment should be individualized. This paper reflects on and summarizes the application and value of optimized treatment models for multidrug-resistant pulmonary tuberculosis, which is of great significance for tuberculosis treatment.
66.
One of the major reasons for the success of answer set programming in recent years was the shift from a theorem proving to a constraint programming view: problems are represented such that stable models, respectively answer sets, rather than theorems correspond to solutions. This shift in perspective proved extremely fruitful in many areas. We believe that going one step further from a "hard" to a "soft" constraint programming paradigm, or, in other words, to a paradigm of qualitative optimization, will prove equally fruitful. In this paper we try to support this claim by showing that several generic problems in logic based problem solving can be understood as qualitative optimization problems, and that these problems have simple and elegant formulations given adequate optimization constructs in the knowledge representation language.
67.
College undergraduates were given repeated opportunities to choose between a fixed-ratio and a progressive-ratio schedule of reinforcement. Completions of a progressive-ratio schedule produced points (exchangeable for money) and incremented that response requirement by 20 responses with each consecutive choice. In the reset condition, completion of a fixed ratio produced the same number of points and also reset the progressive ratio back to its initial value. In the no-reset condition, the progressive ratio continued to increase by increments of 20 throughout the session with each successive selection of this schedule, irrespective of fixed-ratio choices. Subjects' schedule choices were sensitive to parametric manipulations of the size of the fixed-ratio schedule and were consistent with predictions made on the basis of minimizing the number of responses emitted per point earned, which is a principle of most optimality theories. Also, the present results suggest that if data from human performances are to be compared with results for other species, humans should be exposed to schedules of reinforcement for long periods of time, as is commonly done with nonhuman subjects.
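The response-minimization prediction for the reset condition can be worked out directly: over a cycle of k progressive-ratio completions followed by one fixed ratio (which resets the progression), the optimal k minimizes total responses per point. This is a sketch of that arithmetic under the abstract's parameters (initial ratio and step of 20); the specific fixed-ratio sizes tried below are assumptions for illustration.

```python
def cost_per_point(fr_size, k, pr_start=20, pr_step=20):
    """Responses per point for a cycle of k progressive-ratio completions
    (pr_start, pr_start + pr_step, ...) followed by one fixed-ratio
    completion, which resets the progressive ratio (the reset condition)."""
    pr_responses = sum(pr_start + i * pr_step for i in range(k))
    return (pr_responses + fr_size) / (k + 1)      # k PR points + 1 FR point

def optimal_switch(fr_size, max_k=50):
    """Number of PR completions before taking the FR that minimizes
    responses emitted per point earned."""
    return min(range(max_k + 1), key=lambda k: cost_per_point(fr_size, k))
```

The model reproduces the parametric sensitivity the abstract reports: as the fixed ratio grows, an optimizing subject should complete more progressive-ratio runs before switching (e.g. one run at FR 40 but three at FR 160 under these assumed values).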
68.
An Erratum has been published for this article in Journal of Multi‐Criteria Decision Analysis 10(5) 2001, 285. This paper proposes a model for the generation of daily work duties of airside crew (namely, bus drivers) at the Hong Kong International Airport. The results can be adopted as a good crew schedule, in the sense that it is both feasible, satisfying requirements of various work conditions, and ‘optimal’ in minimizing overtime shifts. It is formulated as a goal programme, specifically designed to cater for manpower planning issues and to handle frequent changes of flight schedules through flexibility in the work patterns of driver duties. Illustrative results from an actual case study are given. Copyright © 2001 John Wiley & Sons, Ltd.
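The structure of such a goal programme can be sketched on a toy instance: duty patterns cover time periods, the first (preemptive) goal is meeting staffing demand, and the second is minimizing overtime shifts. Everything below (the patterns, demand, and overtime flags) is an invented miniature, solved by brute force rather than the paper's actual formulation, purely to show the lexicographic-goal idea.

```python
from itertools import product

# Hypothetical toy instance: three duty patterns covering four time
# periods (1 = on duty), the staffing demand per period, and a flag
# marking which patterns count as overtime shifts.
patterns = [
    (1, 1, 0, 0),   # morning duty
    (0, 1, 1, 0),   # midday duty
    (0, 0, 1, 1),   # evening duty (overtime)
]
demand = (2, 3, 3, 2)
overtime = (0, 0, 1)

best = None
for counts in product(range(5), repeat=len(patterns)):
    coverage = [sum(c * p[t] for c, p in zip(counts, patterns))
                for t in range(len(demand))]
    shortfall = sum(max(0, d - cov) for d, cov in zip(demand, coverage))
    ot_shifts = sum(c for c, o in zip(counts, overtime) if o)
    # Preemptive goals: first meet demand, then minimize overtime
    # shifts, then total duties (tuple comparison is lexicographic).
    score = (shortfall, ot_shifts, sum(counts))
    if best is None or score < best[0]:
        best = (score, counts)
# best == ((0, 2, 5), (2, 1, 2)): demand met, 2 overtime duties, 5 in total
```

A production-scale version would replace the enumeration with an integer goal programme, but the ranked deviation variables (`shortfall`, then `ot_shifts`) play the same role.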
69.
Cluster differences scaling is a method for partitioning a set of objects into classes and simultaneously finding a low-dimensional spatial representation of K cluster points, to model a given square table of dissimilarities among n stimuli or objects. The least squares loss function of cluster differences scaling, originally defined only on the residuals of pairs of objects that are allocated to different clusters, is extended with a loss component for pairs that are allocated to the same cluster. It is shown that this extension makes the method equivalent to multidimensional scaling with cluster constraints on the coordinates. A decomposition of the sum of squared dissimilarities into contributions from several sources of variation is described, including the appropriate degrees of freedom for each source. After developing a convergent algorithm for fitting the cluster differences model, it is argued that the individual objects and the cluster locations can be jointly displayed in a configuration obtained as a by-product of the optimization. Finally, the paper introduces a fuzzy version of the loss function, which can be used in a successive approximation strategy for avoiding local minima. A simulation study demonstrates that this strategy significantly outperforms two other well-known initialization strategies, and that it has a success rate of 92 out of 100 in attaining the global minimum.
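The extended loss described above can be illustrated in the smallest non-trivial case: one dimension, K = 2 cluster points, so only their separation d matters, and same-cluster pairs are fitted by distance 0. The brute-force search and the toy dissimilarity data below are assumptions for illustration; the paper fits the model with a convergent algorithm and a fuzzy relaxation, not by enumeration.

```python
from itertools import product

def cds_loss(delta, assign, d):
    """Extended cluster differences scaling loss with K = 2 cluster points
    a distance d apart: between-cluster pairs are fitted by d and
    same-cluster pairs by distance 0."""
    n = len(delta)
    return sum((delta[i][j] - (d if assign[i] != assign[j] else 0.0)) ** 2
               for i in range(n) for j in range(i + 1, n))

def best_partition(delta):
    """Exhaustive search over 2-cluster partitions (feasible only for tiny
    n); for a fixed partition the least squares optimal separation d is the
    mean of the between-cluster dissimilarities."""
    n = len(delta)
    best = None
    for assign in product((0, 1), repeat=n):
        between = [delta[i][j] for i in range(n) for j in range(i + 1, n)
                   if assign[i] != assign[j]]
        if not between:
            continue                       # skip the one-cluster split
        d = sum(between) / len(between)
        loss = cds_loss(delta, assign, d)
        if best is None or loss < best[0]:
            best = (loss, assign, d)
    return best

# Two clear groups: dissimilarity 1 within a group, 5 between groups.
groups = [0, 0, 0, 1, 1, 1]
delta = [[0.0 if i == j else (1.0 if groups[i] == groups[j] else 5.0)
          for j in range(6)] for i in range(6)]
loss, assign, d = best_partition(delta)
```

On this clean input the minimum-loss solution recovers the planted partition with separation d = 5; the residual loss of 6 comes entirely from the within-cluster pairs, matching the decomposition into between- and within-cluster sources of variation.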
70.
Learning hierarchy research has been characterized by the use of ad hoc statistical procedures to determine the validity of postulated hierarchical connections. The two most substantial attempts to legitimize the procedure are due to White & Clark and Dayton & Macready, although both of these methods suffer from serious inadequacies. Data from a number of sources are analyzed using a restricted maximum likelihood estimation procedure, and the results are compared with those obtained using the method suggested by Dayton and Macready. Improved estimates are evidenced by an increase in the computed value of the log likelihood function.