71.
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
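The Huber-weighted M-estimation idea described in this abstract can be sketched as an iteratively reweighted least-squares fit of the moderation model y = b0 + b1·x + b2·m + b3·x·m. This is a minimal single-level sketch, not the authors' two-level implementation or their R program; the function names and the tuning constant c = 1.345 are illustrative assumptions.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber-type weights: 1 for small standardized residuals, c/|r| beyond c."""
    a = np.abs(r)
    w = np.ones_like(a)
    big = a > c
    w[big] = c / a[big]
    return w

def robust_moderation(x, m, y, c=1.345, tol=1e-8, max_iter=200):
    """IRLS fit of y = b0 + b1*x + b2*m + b3*x*m with Huber-type weights
    and a MAD-based robust scale estimate for the residuals."""
    X = np.column_stack([np.ones_like(x), x, m, x * m])
    b = np.linalg.lstsq(X, y, rcond=None)[0]               # OLS starting values
    for _ in range(max_iter):
        r = y - X @ b
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust scale (MAD)
        w = huber_weights(r / s, c)
        sw = np.sqrt(w)
        b_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.max(np.abs(b_new - b)) < tol:
            return b_new
        b = b_new
    return b
```

With heavy-tailed errors (e.g. Student's t with 3 degrees of freedom), the down-weighting of large residuals keeps the interaction coefficient b3 close to its true value where OLS would be noisier.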
72.
We tested the influence of causal links on the production of memory errors in a misinformation paradigm. Participants studied a set of statements about a person, which were presented as either individual statements or pairs of causally linked statements. Participants were then provided with causally plausible and causally implausible misinformation. We hypothesised that studying information connected with causal links would promote representing information in a more abstract manner. As such, we predicted that causal information would not provide an overall protection against memory errors, but rather would preferentially help in the rejection of misinformation that was causally implausible, given the learned causal links. In two experiments, we measured whether the causal linkage of information would be generally protective against all memory errors or only selectively protective against certain types of memory errors. Causal links helped participants reject implausible memory lures, but did not protect against plausible lures. Our results suggest that causal information may promote an abstract storage of information that helps prevent only specific types of memory errors.
73.
In three experiments, we examine the extent to which participants' memory errors are affected by the perceptual features of an encoding series and imagery generation processes. Perceptual features were examined by manipulating the features associated with individual items as well as the relationships among items. An encoding instruction manipulation was included to examine the effects of explicit requests to generate images. In all three experiments, participants falsely claimed to have seen pictures of items presented as words, committing picture misattribution errors. These misattribution errors were exaggerated when the perceptual resemblance between pictures and images was relatively high (Experiment 1) and when explicit requests to generate images were omitted from encoding instructions (Experiments 1 and 2). When perceptual cues made the thematic relationships among items salient, the level and pattern of misattribution errors were also affected (Experiments 2 and 3). Results address alternative views about the nature of internal representations resulting in misattribution errors and refute the idea that these errors reflect only participants' general impressions or beliefs about what was seen.
74.
In this paper, the ontological, terminological, epistemological, and ethical aspects of omission are considered in a coherent and balanced framework, based on the idea that there are omissions which are actions and omissions which are non-actions. In particular, we suggest that the approach to causation which best deals with omission is Mackie's INUS conditional proposal. We argue that omissions are determined partly by the ontological conditional structure of reality, and partly by the interests, beliefs, and values of observers. The final upshot is that moral judgments involved in cases of omissions cannot be grounded on, but are the ground for, judgments about what INUS conditions count as omissions.
75.
We investigated the relationship between different kinds of target reports in a rapid serial visual presentation task and their associated perceptual experience. Participants reported the identity of two targets embedded in a stream of stimuli, along with their associated subjective visibility. In our task, target stimuli could be combined to form more complex ones, thus allowing participants to report temporally integrated percepts. We found that integrated percepts were associated with high subjective visibility scores, whereas reports in which the order of targets was reversed led to a poorer perceptual experience. We also found a reciprocal relationship between the probability that the second target would not be reported correctly and the perceptual experience associated with the first one. Overall, our results indicate that integrated percepts are experienced as a unique, clear perceptual event, whereas order reversals are experienced as confused, similar to cases in which an entirely wrong response was given.
76.
Attention lapses resulting from reactivity to task challenges, and their consequences, are a pervasive factor in everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention lapses: resource-depleting cognitions that interfere with attention to subsequent task challenges. Attention lapses lead to errors, and errors themselves are a potent consequence, often leading to further attention lapses and potentially initiating a spiral into more serious errors. We investigated this challenge-induced error ↔ attention-lapse model using the Sustained Attention to Response Task (SART), a GO–NOGO task requiring continuous attention and responses to a number series and the withholding of responses to a rare NOGO digit. We found that response speed and increased commission errors following task challenges were a function of temporal distance from, and prior performance on, previous NOGO trials. We conclude by comparing and contrasting the present theory and findings to those based on choice paradigms, and argue that the present findings have implications for the generality of conflict monitoring and control models.
77.
Goldrick M, Larson M. Cognition, 2008, 107(3): 1155-1164
Speakers are faster and more accurate at processing certain sound sequences within their language. Does this reflect the fact that these sequences are frequent, or that they are phonetically less complex (e.g., easier to articulate)? It has been difficult to contrast these two factors given their high correlation in natural languages. In this study, participants were exposed to novel phonotactic constraints that decorrelated complexity and frequency by subjecting the same phonological structure to varying degrees of probabilistic constraint. Participants' behavior was sensitive to variations in frequency, demonstrating that phonotactic probability influences speech production independently of phonetic complexity.
78.
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality, what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the population sampled has the covariance structure assumed. Commonly used covariance structure analysis software uses parametric methods for estimating parameters and standard errors. When the population sampled has the covariance structure assumed, but fails to have the distributional form assumed, the parameter estimates usually remain consistent, but the standard error estimates do not. This has motivated the introduction of a variety of nonparametric standard error estimates that are consistent when the population sampled fails to have the distributional form assumed. The only distributional assumption these require is that the covariance structure be correctly specified. As noted, even this assumption is not required for the infinitesimal jackknife. The relation between the infinitesimal jackknife and other nonparametric standard error estimators is discussed. An advantage of the infinitesimal jackknife over the jackknife and the bootstrap is that it requires only one analysis to produce standard error estimates rather than one for every jackknife or bootstrap sample.
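The one-pass character of the infinitesimal jackknife can be illustrated numerically: differentiate a case-weighted statistic with respect to each observation's weight at the observed data, then take the root of the sum of squared derivatives. The sketch below is a generic illustration of that idea, not the paper's covariance-structure implementation; the function names are assumptions. For the weighted mean it recovers the familiar σ̂/√n standard error.

```python
import numpy as np

def ij_se(stat, x, eps=1e-6):
    """Infinitesimal-jackknife standard error of a case-weighted statistic.
    `stat(x, w)` must accept case weights w and normalize by them internally,
    so that the weight-space derivatives equal the influence values / n."""
    n = len(x)
    w0 = np.ones(n)
    base = stat(x, w0)
    g = np.empty(n)
    for i in range(n):
        w = w0.copy()
        w[i] += eps                      # nudge the weight on case i only
        g[i] = (stat(x, w) - base) / eps  # finite-difference d T / d w_i
    return np.sqrt(np.sum(g ** 2))       # one analysis, no resampling

def weighted_mean(x, w):
    """Example statistic: a weight-normalized sample mean."""
    return np.sum(w * x) / np.sum(w)
```

Unlike the jackknife or bootstrap, the loop above only re-evaluates the statistic at infinitesimally perturbed weights around the single observed sample, rather than refitting on n leave-one-out samples or B bootstrap samples.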
79.
Exploratory methods using second-order components and second-order common factors were proposed. The second-order components were obtained from the resolution of the correlation matrix of obliquely rotated first-order principal components. The standard errors of the estimates of the second-order component loadings were derived from an augmented information matrix with restrictions for the loadings and associated parameters. The second-order factor analysis proposed was similar to the classical method in that the factor correlations among the first-order factors were further resolved by the exploratory method of factor analysis. However, in this paper the second-order factor loadings were estimated by generalized least squares using the asymptotic variance-covariance matrix for the first-order factor correlations. The asymptotic standard errors for the estimates of the second-order factor loadings were also derived. A numerical example was presented with simulated results.
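The core resolution step, extracting second-order components from the correlation matrix Φ of obliquely rotated first-order components, can be sketched as an eigendecomposition of Φ with loadings formed by scaling the leading eigenvectors by the square roots of their eigenvalues. This is a minimal illustration only, assuming Φ is already in hand; it omits the paper's standard-error derivations, and the function name is an assumption.

```python
import numpy as np

def second_order_loadings(phi, k=1):
    """Second-order component loadings from `phi`, the correlation matrix
    of obliquely rotated first-order components: the k leading eigenvectors
    of phi scaled by the square roots of their eigenvalues."""
    vals, vecs = np.linalg.eigh(phi)          # ascending eigenvalues
    order = np.argsort(vals)[::-1]            # reorder to descending
    vals, vecs = vals[order], vecs[:, order]
    return vecs[:, :k] * np.sqrt(vals[:k])    # loadings, one column per component
```

For a compound-symmetric Φ (all first-order components equally correlated at r), the single second-order component loads equally on every first-order component with loading √((1 + (p − 1)r)/p), which the eigendecomposition recovers exactly.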
80.
Hao Xueqin, Han Kai. Acta Psychologica Sinica, 2002, 34(4): 32-36
This study proposed a new form of criterion test for feeling-of-knowing (FOK) research, overlearning through repeated study, and verified its feasibility as a criterion test for FOK research; it also compared overlearning with recognition as criterion tests. In addition, it explored how two types of errors in FOK judgments, substitution errors and omission errors, differ in FOK judgment ratings and criterion-test performance. Two experiments showed that FOK judgments accurately predicted performance on the subsequent criterion tests (overlearning and recognition), and that overlearning can serve as a criterion test. The results also showed that substitution errors received higher FOK ratings than omission errors, and that differences in memory-activation strength between the two error types could be detected whether recognition or overlearning was used as the criterion test.