Access: 188 paid full-text, 10 free, 9 free for domestic users.
Results by year: 2023: 2; 2022: 3; 2021: 7; 2020: 11; 2019: 16; 2018: 10; 2017: 10; 2016: 7; 2015: 3; 2014: 13; 2013: 30; 2012: 3; 2011: 8; 2010: 4; 2009: 8; 2008: 8; 2007: 11; 2006: 6; 2005: 4; 2004: 3; 2003: 3; 2002: 3; 2001: 2; 2000: 1; 1998: 2; 1995: 2; 1994: 3; 1993: 1; 1992: 1; 1991: 1; 1990: 1; 1989: 1; 1988: 2; 1986: 3; 1985: 1; 1984: 2; 1983: 1; 1982: 2; 1981: 1; 1980: 1; 1979: 1; 1978: 2; 1977: 3.
A total of 207 search results were returned; items 11-20 are listed below.
11.
Factor analysis is regularly used for analyzing survey data. Missing data, data with outliers and consequently nonnormal data are very common for data obtained through questionnaires. Based on covariance matrix estimates for such nonstandard samples, a unified approach for factor analysis is developed. By generalizing the approach of maximum likelihood under constraints, statistical properties of the estimates for factor loadings and error variances are obtained. A rescaled Bartlett-corrected statistic is proposed for evaluating the number of factors. Equivariance and invariance of parameter estimates and their standard errors for canonical, varimax, and normalized varimax rotations are discussed. Numerical results illustrate the sensitivity of classical methods and the advantages of the proposed procedures.

This project was supported by a University of North Texas Faculty Research Grant, Grant #R49/CCR610528 from the Centers for Disease Control and Prevention, National Center for Injury Prevention and Control, and Grant DA01070 from the National Institute on Drug Abuse. The results do not necessarily represent the official view of the funding agencies. The authors are grateful to three reviewers for suggestions that improved the presentation of this paper.
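The rescaled statistic is specific to the paper, but the classical normal-theory test it generalizes is standard. Below is a minimal sketch of a Bartlett-corrected likelihood-ratio test for whether k factors suffice, assuming a plain maximum-likelihood fit via scikit-learn's FactorAnalysis; this illustrates the classical procedure the paper improves on, not the authors' robust method, and the toy data are hypothetical.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import FactorAnalysis

def bartlett_lr_test(X, k):
    """Classical Bartlett-corrected likelihood-ratio test that k factors suffice."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)                       # sample covariance matrix
    fa = FactorAnalysis(n_components=k).fit(X)        # normal-theory ML factor fit
    # Model-implied covariance: Lambda' Lambda + Psi
    Sigma = fa.components_.T @ fa.components_ + np.diag(fa.noise_variance_)
    # ML discrepancy between the sample and model-implied covariances
    f_min = (np.log(np.linalg.det(Sigma)) - np.log(np.linalg.det(S))
             + np.trace(S @ np.linalg.inv(Sigma)) - p)
    # Bartlett's small-sample correction to the chi-square scaling factor
    scale = n - 1 - (2 * p + 5) / 6 - 2 * k / 3
    stat = scale * f_min
    df = ((p - k) ** 2 - (p + k)) / 2
    return stat, df, chi2.sf(stat, df)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6)) @ rng.normal(size=(6, 6))   # hypothetical correlated data
print(bartlett_lr_test(X, k=2))
```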
12.
13.
Clive D. Field, Religion, 2014, 44(3): 357-382

Abstract

The British religious census of 2011 is located in its broader historical and methodological context. The principal developments in the measurement of religious affiliation (proxy-assigned or self-assigned) in Britain are traced from the Reformation to the present day, charting the relative contribution of the Churches, the State and empirical social science. The key statistics which have emerged from their respective efforts are summarised, with nominal religious affiliation universal until the time of the French Revolution and preponderant until as late as the 1980s. For recent decades, when the profession of faith has been rejected by large numbers of Britons, particular attention is paid to the variant results from different question-wording. Depending upon what is asked, the proportion of the population currently making sense of their lives without asserting a confessional religious identity ranges from one-quarter to one-half. The difficulties of trying to construct a religious barometer through a single, unitary indicator are thus illuminated.
14.
15.
Comparing datasets, that is, sets of numbers in context, is a critical skill in higher order cognition. Although much is known about how people compare single numbers, little is known about how number sets are represented and compared. We investigated how subjects compared datasets that varied in their statistical properties, including ratio of means, coefficient of variation, and number of observations, by measuring eye fixations, accuracy, and confidence when assessing differences between number sets. Results indicated that participants implicitly create and compare approximate summary values that include information about mean and variance, with no evidence of explicit calculation. Accuracy and confidence increased, while the number of fixations decreased as sets became more distinct (i.e., as mean ratios increase and variance decreases), demonstrating that the statistical properties of datasets were highly related to comparisons. The discussion includes a model proposing how reasoners summarize and compare datasets within the architecture for approximate number representation.
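The abstract describes comparisons driven by the mean ratio, the coefficient of variation, and set size; the short sketch below simply computes those properties plus a pooled-SD separation index for two hypothetical number sets. It is my own illustration of the statistical quantities named above, not the authors' model of approximate summary representation.

```python
import numpy as np

def set_comparison_stats(a, b):
    """Properties that make two number sets more or less distinct."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mean_ratio = max(a.mean(), b.mean()) / min(a.mean(), b.mean())
    cv_a = a.std(ddof=1) / a.mean()      # coefficient of variation, set A
    cv_b = b.std(ddof=1) / b.mean()      # coefficient of variation, set B
    # Separation index: mean difference relative to the pooled standard deviation
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    separation = abs(a.mean() - b.mean()) / pooled_sd
    return {"mean_ratio": round(mean_ratio, 3), "cv_a": round(cv_a, 3),
            "cv_b": round(cv_b, 3), "separation": round(separation, 3),
            "n_a": len(a), "n_b": len(b)}

print(set_comparison_stats([12, 15, 14, 13, 16], [21, 24, 23, 25, 22]))
```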
16.
We present a simple but effective method based on Luce’s choice axiom [Luce, R.D. (1959). Individual choice behavior: A theoretical analysis. New York: John Wiley & Sons] for consistent estimation of the pairwise confusabilities of items in a multiple-choice recognition task with arbitrarily chosen choice-sets. The method combines the exact (non-asymptotic) Bayesian way of assessing uncertainty with the unbiasedness emphasized in the classical frequentist approach.

We apply the method to data collected using an adaptive computer game designed for prevention of reading disability. A player’s estimated confusability of phonemes (or more accurately, phoneme-grapheme connections) and larger units of language is visualized in an easily understood way with color cues and explicit indication of the accuracy of the estimates. Visualization of learning-related changes in the player’s performance is considered.

The empirical validity of the choice axiom is evaluated using the game data itself. The axiom appears to hold reasonably well although a small systematic violation is observable for the smallest choice-set sizes.
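Luce's choice axiom itself is fully specified by the cited reference: the probability of choosing item i from a choice-set is its response strength divided by the summed strengths of all items in that set, so strength ratios are preserved across choice-sets of any size. A minimal sketch of that rule follows; the phoneme labels and strength values are hypothetical, and this is only the choice rule, not the paper's Bayesian estimator.

```python
import numpy as np

def luce_choice_probs(strengths, choice_set):
    """P(choose i | choice_set) = v_i / sum of v_j over the choice set (Luce, 1959)."""
    v = np.array([strengths[item] for item in choice_set], dtype=float)
    return dict(zip(choice_set, v / v.sum()))

# Hypothetical response strengths for phoneme alternatives in one recognition trial
strengths = {"/b/": 3.0, "/p/": 1.5, "/d/": 0.5}

# The axiom implies the /b/ : /p/ odds are the same in every choice-set containing both
print(luce_choice_probs(strengths, ["/b/", "/p/"]))
print(luce_choice_probs(strengths, ["/b/", "/p/", "/d/"]))
```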
17.
Traditionally, multinomial processing tree (MPT) models are applied to groups of homogeneous participants, where all participants within a group are assumed to have identical MPT model parameter values. This assumption is unreasonable when MPT models are used for clinical assessment, and it often may be suspect for applications to ordinary psychological experiments. One method for dealing with parameter variability is to incorporate random effects assumptions into a model. This is achieved by assuming that participants’ parameters are drawn independently from some specified multivariate hyperdistribution. In this paper we explore the assumption that the hyperdistribution consists of independent beta distributions, one for each MPT model parameter. These beta-MPT models are ‘hierarchical models’, and their statistical inference is different from the usual approaches based on data aggregated over participants. The paper provides both classical (frequentist) and hierarchical Bayesian approaches to statistical inference for beta-MPT models. In simple cases the likelihood function can be obtained analytically; however, for more complex cases, Markov Chain Monte Carlo algorithms are constructed to assist both approaches to inference. Examples based on clinical assessment studies are provided to demonstrate the advantages of hierarchical MPT models over aggregate analysis in the presence of individual differences.
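To make the beta-MPT idea concrete, the sketch below draws each participant's parameters from independent Beta hyperdistributions and simulates responses from a simple one-high-threshold tree; the tree structure and hyperparameter values are hypothetical illustrations, not taken from the paper or its clinical examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_beta_mpt(n_participants=30, n_old_items=40, a_d=8, b_d=4, a_g=2, b_g=2):
    """Simulate a beta-MPT: Beta hyperdistributions over a one-high-threshold tree.

    Each participant's detection probability d and guessing probability g are
    drawn from independent Beta distributions; an 'old' item is called old with
    probability d + (1 - d) * g (detect it, or fail to detect and guess 'old').
    """
    d = rng.beta(a_d, b_d, size=n_participants)   # per-participant detection
    g = rng.beta(a_g, b_g, size=n_participants)   # per-participant guessing
    p_hit = d + (1 - d) * g                       # tree-implied hit probability
    hits = rng.binomial(n_old_items, p_hit)       # observed hit counts per participant
    return d, g, hits

d, g, hits = simulate_beta_mpt()
print("mean detection:", d.mean().round(3),
      "| mean observed hit rate:", (hits / 40).mean().round(3))
```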
18.
Collaborative inhibition refers to the phenomenon that when several people work together to produce a single memory report, they typically produce fewer items than when the unique items in the individual reports of the same number of participants are combined (i.e., nominal recall). Yet, apart from this negative effect, collaboration may be beneficial in that group members remove errors from a collaborative report. Collaborative inhibition studies on memory for emotional stimuli are scarce. Therefore, the present study examined both collaborative inhibition and collaborative error reduction in the recall of the details of emotional material in a laboratory setting. Female undergraduates (n = 111) viewed a film clip of a fatal accident and subsequently engaged in either collaborative (n = 57) or individual recall (n = 54) in groups of three. The results show that, across several detail categories, collaborating groups recalled fewer details than nominal groups. However, overall, nominal recall produced more errors than collaborative recall. The present results extend earlier findings on both collaborative inhibition and error reduction to the recall of affectively laden material. These findings may have implications for the applied fields of forensic and clinical psychology.
19.
This paper is concerned with the implications of Husserl's phenomenological reformulation of the problem of error. Following Husserl, I argue that the phenomenon of error should not be understood as the accidental failure of a fully constituted cogito, but that it is itself constitutive of the cogito's formation. I thus show that the phenomenon of error plays a crucial role in our self-understanding as unified subjects of experience. In order to unpack this 'hermeneutical function' of error, I focus on three inter-related notions which are recurrently used by Husserl to refer to the central aspects of error apprehension: explosion (Explosion), replacement (Ersatz), and cancellation (Durchstreichung). My discussion, however, does not remain committed to the Husserlian framework as such. This is not only because Husserl's notion of explosion proves itself untenable, but because the Husserlian paradigm does not make room for a linguistic dimension intrinsic, in my view, to the realization of error. Hence, I proceed by reconstructing the Husserlian terms as tropes of realization, as narratological devices in the 'language game' of error. I argue that these hermeneutical devices are necessary for maintaining what Nietzsche would call the self's 'semblance of unity'.

'The assumption of one single subject is perhaps unnecessary; perhaps it is just as permissible to assume a multiplicity of subjects, whose interaction and struggle is the basis for our thought and our consciousness in general?' (Nietzsche, The Will to Power #490)
20.
Based on a comparison of current mainstream psychological statistics textbooks in China and abroad, this paper points out that, relative to the textbooks of the 1980s, the new developments in content lie mainly in three areas: (1) the treatment of hypothesis testing has been extended with indices and estimation methods for statistical power and effect size; (2) the general linear model has been introduced to unify analysis of variance and regression analysis; and (3) a modest amount of multivariate statistical analysis has been added. This paper briefly reviews the first two developments and proposes new ideas for organizing textbook content.
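The second development the review highlights, treating analysis of variance as a special case of the general linear model, is easy to demonstrate. The sketch below fits a one-way ANOVA as a dummy-coded regression and reports an eta-squared effect size; the data and the statsmodels usage are my own illustration, not material from the textbooks under review.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], 20),
    "y": np.concatenate([rng.normal(m, 1.0, 20) for m in (0.0, 0.5, 1.0)]),
})

# A one-way ANOVA is a linear regression on dummy-coded group membership
fit = smf.ols("y ~ C(group)", data=df).fit()
anova_table = sm.stats.anova_lm(fit, typ=2)
print(anova_table)

# Effect size (eta squared): between-group sum of squares over total sum of squares
ss_between = anova_table.loc["C(group)", "sum_sq"]
eta_squared = ss_between / anova_table["sum_sq"].sum()
print("eta squared:", round(eta_squared, 3))
```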