81.
Quarterly Journal of Experimental Psychology (2006), 2013, 66(9), 1826-1841
In recognition memory, increasing the strength of studied items does not reduce performance on other items, an effect dubbed the null list strength effect (LSE). While this finding has been replicated many times, it has rarely been tested using stimuli other than single words. Kinnell and Dennis (2012) recently tested for the presence of list length effects using non-word stimulus classes while controlling for the confounds that are present in list length designs. Small list length effects were found for fractal and face images. We adopted the same paradigm and stimuli used by Kinnell and Dennis to test whether these stimuli would be susceptible to list strength effects as well. We found significant LSEs for fractal images, but null LSEs for face images and natural scene photographs. Stimuli other than words do appear to be susceptible to list strength effects, but these effects are small and restricted to particular stimulus classes, as is the case in list length designs. Models of memory may be able to address differences between these stimulus classes by positing differences in representational overlap between them.
82.
Journal of Cognitive Psychology, 2013, 25(5-6), 605-618
In this paper, we propose self-organising maps as possible candidates to explain the psychological mechanisms underlying category generalisation. Self-organising maps are psychologically and biologically plausible neural network models that can learn after limited exposure to positive category examples, without any need of contrastive information. They reproduce human behaviour in category generalisation, in particular the Numerosity and Variability effects, which are usually explained with Bayesian tools. Where category generalisation is concerned, self-organising maps deserve attention to bridge the gap between the computational level of analysis in Marr's hierarchy (where Bayesian models are often situated) and the algorithmic level of analysis, in which plausible mechanisms are described.
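The self-organising-map account lends itself to a compact sketch. The NumPy toy below trains a small map on positive examples only and scores a probe by its distance to the best-matching unit; the grid size, decay schedules, and two-dimensional data are illustrative assumptions, not the model or parameters used in the paper.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal 2-D self-organising map on positive examples only."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    weights = rng.normal(size=(n_units, data.shape[1]))
    # Grid coordinates of each unit, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # decaying neighbourhood width
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))                 # neighbourhood kernel
            weights += lr * h[:, None] * (x - weights)
    return weights

def generalisation_score(weights, x):
    """Smaller distance to the nearest unit means stronger category endorsement."""
    return -np.min(np.linalg.norm(weights - x, axis=1))

# A low-variability category: the trained map packs tightly around it,
# so a near probe is endorsed more strongly than a distant one.
rng = np.random.default_rng(1)
examples = rng.normal(0.0, 0.1, size=(20, 2))
som = train_som(examples)
near, far = np.array([0.05, 0.0]), np.array([2.0, 2.0])
assert generalisation_score(som, near) > generalisation_score(som, far)
```

Varying the number or spread of the training examples changes how tightly the units pack around the category, which is the kind of mechanism the authors appeal to for the Numerosity and Variability effects.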
83.
Perceived product instrumentality (PPI) is a new construct that is proposed as a key process component of a general model of family purchasing behaviour. PPI reflects the degree to which consumers, viewed as actors of social roles, deem a product to be helpful, facilitative of role performance, compatible with role identity and congruent with the self-concept. The objective of this paper is threefold: (1) assess the PPI unidimensionality and reliability; (2) purify the PPI scale; and (3) assess its validity. First, a pilot survey was administered to a convenience sample of men and women, who filled in four identical lists of 33 items tapping their attitudes towards durables, and exploratory factor analysis was conducted on each set to explore the overall pattern of the item relationships. Five try-out pools of different sizes (33, 28, 15, 13 and 9 items) were involved in the analysis. The 15-item scale was retained. Secondly, a large-scale survey was administered to 500 couples as part of an extensive research project involving comprehensive model testing. Exploratory factor analysis was conducted on the whole sample for reliability and unidimensionality assessment. At times, the analysis was done on men's and women's sub-samples separately in order to account for possible differences between the two populations. Thirdly, confirmatory factor analysis was conducted, splitting the sample into two random halves: the generation sample and the validation sample. The first half served for the purification of the PPI scale. In this case, PPI was posited as the latent variable and the scale items were posited as the manifest ones. The second half served to validate the PPI theory in a systems framework: PPI was posited as a latent dependent variable while other role orientation variables were posited as latent independent variables. Copyright © 2002 Henry Stewart Publications.
84.
85.
Why Higher Working Memory Capacity May Help You Learn: Sampling, Search, and Degrees of Approximation
Algorithms for approximate Bayesian inference, such as those based on sampling (i.e., Monte Carlo methods), provide a natural source of models of how people may deal with uncertainty with limited cognitive resources. Here, we consider the idea that individual differences in working memory capacity (WMC) may be usefully modeled in terms of the number of samples, or "particles," available to perform inference. To test this idea, we focus on two recent experiments that report positive associations between WMC and two distinct aspects of categorization performance: the ability to learn novel categories, and the ability to switch between different categorization strategies ("knowledge restructuring"). In favor of the idea of modeling WMC as a number of particles, we show that a single model can reproduce both experimental results by varying the number of particles: increasing the number of particles leads to both faster category learning and improved strategy-switching. Furthermore, when we fit the model to individual participants, we found a positive association between WMC and best-fit number of particles for strategy switching. However, no association between WMC and best-fit number of particles was found for category learning. These results are discussed in the context of the general challenge of disentangling the contributions of different potential sources of behavioral variability.
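The "WMC as a number of particles" idea can be illustrated without the full categorization model. The sketch below is a deliberately simplified stand-in for the paper's particle-based learner: it approximates a posterior by importance sampling and shows that estimates based on few particles are noisier than estimates based on many. All names, data, and parameters are invented for the illustration.

```python
import numpy as np

def particle_posterior_mean(data, n_particles, rng):
    """Approximate the posterior mean of a coin's bias with importance sampling.

    Particles are draws from a uniform prior; weights are the Bernoulli
    likelihood of the observed data under each particle.
    """
    particles = rng.uniform(0, 1, size=n_particles)
    heads, n = data.sum(), len(data)
    log_w = heads * np.log(particles) + (n - heads) * np.log1p(-particles)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return np.sum(w * particles)

rng = np.random.default_rng(0)
data = rng.random(20) < 0.7                      # 20 flips of a 0.7-bias coin
true_mean = (data.sum() + 1) / (len(data) + 2)   # exact Beta(1,1) posterior mean

def avg_error(n_particles, reps=200):
    errs = [abs(particle_posterior_mean(data, n_particles,
                                        np.random.default_rng(r)) - true_mean)
            for r in range(reps)]
    return float(np.mean(errs))

# A "low-WMC" learner (few particles) approximates the posterior more
# noisily than a "high-WMC" learner (many particles).
assert avg_error(3) > avg_error(100)
```

The same mechanism scales up: in the paper's setting, the posterior is over category representations rather than a single bias parameter, and the particle count is the free quantity linked to WMC.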
86.
Making judgments by relying on beliefs about the causal relationships between events is a fundamental capacity of everyday cognition. In the last decade, Causal Bayesian Networks have been proposed as a framework for modeling causal reasoning. Two experiments were conducted to provide comprehensive data sets with which to evaluate a variety of different types of judgments in comparison to standard Bayesian network calculations. Participants were introduced to a fictional system of three events and observed a set of learning trials that instantiated the multivariate distribution relating the three variables. We tested inferences on chains X1 → Y → X2, common cause structures X1 ← Y → X2, and common effect structures X1 → Y ← X2, on binary and numerical variables, and with high and intermediate causal strengths. We tested transitive inferences, inferences when one variable is irrelevant because it is blocked by an intervening variable (the Markov assumption), inferences from two variables to a middle variable, and inferences about the presence of one cause when the alternative cause was known to have occurred (the normative "explaining away" pattern). Compared to the normative account, in general, when the judgments should change, they change in the normative direction. However, we also discuss a few persistent violations of the standard normative model. In addition, we evaluate the relative success of 12 theoretical explanations for these deviations.
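The normative "explaining away" pattern on a common-effect structure X1 → Y ← X2 can be reproduced by brute-force enumeration over a small hypothetical network; the noisy-OR parameterization and base rates below are assumptions for illustration, not values from the experiments.

```python
import itertools

# Hypothetical common-effect network X1 -> Y <- X2 with binary variables.
P_X1, P_X2 = 0.3, 0.3

def p_y_given(x1, x2):
    # Noisy-OR: each present cause independently produces Y with strength 0.8.
    return 1 - (1 - 0.8 * x1) * (1 - 0.8 * x2)

def joint(x1, x2, y):
    p = (P_X1 if x1 else 1 - P_X1) * (P_X2 if x2 else 1 - P_X2)
    py = p_y_given(x1, x2)
    return p * (py if y else 1 - py)

def cond_p_x1(y, x2=None):
    """P(X1=1 | Y=y [, X2=x2]) by brute-force enumeration."""
    num = den = 0.0
    for x1, x2v in itertools.product([0, 1], repeat=2):
        if x2 is not None and x2v != x2:
            continue
        p = joint(x1, x2v, y)
        den += p
        if x1:
            num += p
    return num / den

# Explaining away: learning that the alternative cause occurred lowers
# the probability of X1, even though the effect Y is still present.
assert cond_p_x1(1) > cond_p_x1(1, x2=1)
```

With these illustrative numbers, P(X1=1 | Y=1) is about 0.60 but drops to about 0.34 once X2=1 is also observed, which is the qualitative pattern the normative model predicts.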
87.
Joseph Lee Rodgers, Multivariate Behavioral Research, 2016, 51(1), 30-34
The Bayesian-frequentist debate typically portrays these statistical perspectives as opposing views. However, both Bayesian and frequentist statisticians have expanded their epistemological basis away from a singular focus on the null hypothesis, to a broader perspective involving the development and comparison of competing statistical/mathematical models. For frequentists, statistical developments such as structural equation modeling and multilevel modeling have facilitated this transition. For Bayesians, the Bayes factor has facilitated this transition. The Bayes factor is treated in articles within this issue of Multivariate Behavioral Research. The current presentation provides brief commentary on those articles and more extended discussion of the transition toward a modern modeling epistemology. In certain respects, Bayesians and frequentists share common goals.
88.
Hal S. Stern, Multivariate Behavioral Research, 2016, 51(1), 23-29
Procedures used for statistical inference are receiving increased scrutiny as the scientific community studies the factors associated with ensuring reproducible research. This note addresses recent negative attention directed at p values, the relationship of confidence intervals and tests, and the role of Bayesian inference and Bayes factors, with an eye toward better understanding these different strategies for statistical inference. We argue that researchers and data analysts too often resort to binary decisions (e.g., whether to reject or accept the null hypothesis) in settings where this may not be required.
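The contrast between a binary significance decision and graded Bayesian evidence can be made concrete with a small stock example: 15 successes in 20 trials gives a "significant" two-sided p value near .04, yet the Bayes factor against the point null, under an illustrative uniform prior, is only about 3, i.e., modest evidence. The choice of prior here is an assumption for the example, not a recommendation from the article.

```python
from math import comb

# 15 successes in 20 trials; null hypothesis theta = 0.5.
n, k = 20, 15

def pmf(x):
    return comb(n, x) * 0.5 ** n

# Two-sided p value: total probability of outcomes no more likely than k.
p_value = sum(pmf(x) for x in range(n + 1) if pmf(x) <= pmf(k))

# Marginal likelihoods: point null vs. uniform Beta(1,1) prior on theta.
m0 = pmf(k)          # P(data | theta = 0.5)
m1 = 1 / (n + 1)     # integral of C(n,k) * theta^k * (1-theta)^(n-k) d(theta)
bf10 = m1 / m0       # Bayes factor for H1 over H0

print(round(p_value, 3), round(bf10, 2))  # -> 0.041 3.22
```

The same data thus "reject the null" at the .05 level while the Bayes factor quantifies the evidence as only about 3 to 1, illustrating why a graded summary can be more informative than a binary decision.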
89.
Small-sample inference with clustered data has received increased attention recently in the methodological literature, with several simulation studies being presented on the small-sample behavior of many methods. However, nearly all previous studies focus on a single class of methods (e.g., only multilevel models, only corrections to sandwich estimators), and the differential performance of various methods that can be implemented to accommodate clustered data with very few clusters is largely unknown, potentially due to rigid disciplinary preferences. Furthermore, a majority of these studies focus on scenarios with 15 or more clusters and feature unrealistically simple data-generation models with very few predictors. This article, motivated by an applied educational psychology cluster randomized trial, presents a simulation study that simultaneously addresses the extreme small-sample setting and the differential performance (estimation bias, Type I error rates, and relative power) of 12 methods for accounting for clustered data, with a model that features a more realistic number of predictors. The motivating data are then modeled with each method, and results are compared. Results show that generalized estimating equations perform poorly; the choice of Bayesian prior distributions affects performance; and fixed effect models perform quite well. Limitations and implications for applications are also discussed.
90.
Factor analysis is a popular statistical technique for multivariate data analysis. Developments in the structural equation modeling framework have enabled the use of hybrid confirmatory/exploratory approaches in which factor-loading structures can be explored relatively flexibly within a confirmatory factor analysis (CFA) framework. Recently, Muthén and Asparouhov proposed a Bayesian structural equation modeling (BSEM) approach to explore the presence of cross loadings in CFA models. We show that the issue of determining factor-loading patterns may be formulated as a Bayesian variable selection problem in which Muthén and Asparouhov's approach can be regarded as a BSEM approach with a ridge regression prior (BSEM-RP). We propose another Bayesian approach, denoted herein as Bayesian structural equation modeling with a spike-and-slab prior (BSEM-SSP), which serves as a one-stage alternative to the BSEM-RP. We review the theoretical advantages and disadvantages of both approaches and compare their empirical performance relative to two modification-index-based approaches and exploratory factor analysis with target rotation. A teacher stress scale data set is used to demonstrate our approach.
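The difference between a ridge-type prior (as in BSEM-RP) and a spike-and-slab prior (as in BSEM-SSP) can be caricatured with a single cross-loading: given a normal likelihood for the estimated loading, a spike-and-slab mixture shrinks a small estimate toward zero much harder than a ridge prior while leaving a large estimate nearly untouched. All priors and numbers below are illustrative assumptions; this is a one-parameter sketch, not the BSEM machinery itself.

```python
import numpy as np

# Observed cross-loading estimate lhat ~ Normal(lambda, se^2); compare
# posterior means under a ridge (small-variance normal) prior and a
# spike-and-slab mixture prior, by grid integration.
se = 0.1
grid = np.linspace(-2, 2, 4001)

def log_normal(x, sd):
    return -x ** 2 / (2 * sd ** 2) - np.log(sd)

def posterior_mean(lhat, log_prior):
    log_post = log_prior - (grid - lhat) ** 2 / (2 * se ** 2)
    w = np.exp(log_post - log_post.max())
    return np.sum(w * grid) / np.sum(w)

ridge = log_normal(grid, 0.1)                                      # ridge-type prior
spike_slab = np.logaddexp(np.log(0.5) + log_normal(grid, 0.01),    # spike at zero
                          np.log(0.5) + log_normal(grid, 1.0))     # diffuse slab

small, large = 0.05, 0.8
# Spike-and-slab shrinks a small loading toward zero more aggressively...
assert abs(posterior_mean(small, spike_slab)) < abs(posterior_mean(small, ridge))
# ...while leaving a large loading far less shrunk than the ridge prior does.
assert abs(posterior_mean(large, spike_slab)) > abs(posterior_mean(large, ridge))
```

This selective shrinkage is what makes the spike-and-slab formulation behave like variable selection: negligible cross loadings are pushed to (near) zero while substantial ones are retained.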