11.
The multisensory response enhancement (MRE), occurring when the response to a visual target integrated with a spatially congruent sound is stronger than the response to the visual target alone, is believed to be mediated by the superior colliculus (SC) (Stein & Meredith, 1993). Here, we used a focused attention paradigm to show that the spatial congruency effect occurs with red (SC-effective) but not blue (SC-ineffective) visual stimuli when presented with spatially congruent sounds. To isolate the chromatic component of SC-ineffective targets and to demonstrate the selectivity of the spatial congruency effect, we used the random luminance modulation technique (Experiment 1) and the tritanopic technique (Experiment 2). Our results indicate that the spatial congruency effect does not require the distribution of attention over different sensory modalities and provide correlational evidence that the SC mediates the effect.
12.
Food-deprived rats in Experiment 1 responded to one of two tandem schedules that were, with equal probability, associated with a sample lever. The tandem schedules' initial links were different random-interval schedules. Their values were adjusted to approximate equality in time to completing each tandem schedule's response requirements. The tandem schedules differed in their terminal links: One reinforced short interresponse times; the other reinforced long ones. Tandem-schedule completion presented two comparison levers, one of which was associated with each tandem schedule. Pressing the lever associated with the sample-lever tandem schedule produced a food pellet. Pressing the other produced a blackout. The difference between terminal-link reinforced interresponse times varied across 10-trial blocks within a session. Conditional-discrimination accuracy increased with the size of the temporal difference between terminal-link reinforced interresponse times. In Experiment 2, one tandem schedule was replaced by a random ratio, while the comparison schedule was either a tandem schedule that only reinforced long interresponse times or a random-interval schedule. Again, conditional-discrimination accuracy increased with the temporal difference between the two schedules' reinforced interresponse times. Most rats mastered the discrimination between random ratio and random interval, showing that the interresponse times these schedules reinforce can serve as a basis for discriminating between them.
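The property the rats could exploit is that a random-interval (RI) schedule differentially reinforces long interresponse times (the longer the pause, the more likely the interval has elapsed), while a random-ratio (RR) schedule pays off each response with a fixed probability regardless of the preceding pause. A minimal Python simulation of that contrast (not from the article; exponential IRTs and all parameter values are illustrative assumptions):

```python
import random

def simulate_schedule(kind, n_responses=20000, mean_irt=1.0, ri_t=2.0,
                      rr_n=10, seed=0):
    # Emit responses with exponentially distributed interresponse times
    # (IRTs) and compare mean reinforced vs. unreinforced IRTs under an
    # RI schedule (mean interval ri_t) or an RR schedule (ratio rr_n).
    rng = random.Random(seed)
    reinforced, unreinforced = [], []
    next_setup = rng.expovariate(1.0 / ri_t)   # RI: when food next "sets up"
    t = 0.0
    for _ in range(n_responses):
        irt = rng.expovariate(1.0 / mean_irt)
        t += irt
        if kind == "RI":
            hit = t >= next_setup                # long pauses cross the setup
            if hit:
                next_setup = t + rng.expovariate(1.0 / ri_t)
        else:  # "RR": every response pays off with probability 1/rr_n
            hit = rng.random() < 1.0 / rr_n
        (reinforced if hit else unreinforced).append(irt)
    return (sum(reinforced) / len(reinforced),
            sum(unreinforced) / len(unreinforced))
```

Under RI the mean reinforced IRT comes out well above the mean unreinforced IRT; under RR the two are essentially equal, so reinforced IRT length is exactly the kind of cue that could support the rats' discrimination.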
13.
A standard approach to distinguishing people’s risk preferences is to estimate a random utility model using a power utility function to characterize the preferences and a logit function to capture choice consistency. We demonstrate that with often-used choice situations, this model suffers from empirical underidentification, meaning that parameters cannot be estimated precisely. With simulations of estimation accuracy and Kullback–Leibler divergence measures we examined factors that potentially mitigate this problem. First, using a choice set that guarantees a switch in the utility order between two risky gambles in the range of plausible values leads to higher estimation accuracy than randomly created choice sets or the purpose-built choice sets common in the literature. Second, parameter estimates are regularly correlated, which contributes to empirical underidentification. Examining standardizations of the utility scale, we show that they mitigate this correlation and additionally improve the estimation accuracy for choice consistency. Yet, they can have detrimental effects on the estimation accuracy of risk preference. Finally, we also show how repeated versus distinct choice sets and an increase in observations affect estimation accuracy. Together, these results should help researchers make informed design choices to estimate parameters in the random utility model more precisely.
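As a sketch of the model class being estimated (not the authors' code: the power-utility and logit forms are the standard choices the abstract names, but the gambles and parameter values below are made up):

```python
import math

def power_utility(x, alpha):
    # Power (CRRA-family) utility for non-negative outcomes;
    # alpha is the risk-preference parameter.
    return x ** alpha

def expected_utility(gamble, alpha):
    # A gamble is a list of (probability, outcome) pairs.
    return sum(p * power_utility(x, alpha) for p, x in gamble)

def choice_prob_A(gamble_a, gamble_b, alpha, theta):
    # Logit choice rule: theta is the choice-consistency parameter.
    # theta = 0 gives coin-flip choice; large theta gives near-
    # deterministic choice of the higher-utility gamble.
    diff = expected_utility(gamble_a, alpha) - expected_utility(gamble_b, alpha)
    return 1.0 / (1.0 + math.exp(-theta * diff))
```

The underidentification problem is visible in this structure: alpha and theta both scale the effective utility difference, so on a poorly chosen gamble set many (alpha, theta) pairs produce nearly identical choice probabilities, and the likelihood surface is correspondingly flat.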
14.
The Adams-Creamer hypothesis states that S uses the decay of proprioceptive feedback from an early portion of a movement to cue a timed response at some later time. This hypothesis was tested by creating passive left-arm movement in one group and withholding it from another, and having Ss make a right-hand response when exactly 2.0 sec had elapsed since the end of the movement. Ss with left-arm feedback had less absolute and algebraic error, and greater within-S consistency, than did the no-movement control Ss. Moreover, when KR was withdrawn, Ss with left-arm movement regressed less than did Ss without it. These results provide two lines of support for the decay hypothesis.
15.
Multilevel factor analysis models are widely used in the social sciences to account for heterogeneity in mean structures. In this paper we extend previous work on multilevel models to account for general forms of heterogeneity in confirmatory factor analysis models. We specify various models of mean and covariance heterogeneity in confirmatory factor analysis and develop Markov chain Monte Carlo (MCMC) procedures to perform Bayesian inference, model checking, and model comparison. We test our methodology using synthetic data and data from a consumption emotion study. The results from synthetic data show that our Bayesian models perform well in recovering the true parameters and selecting the appropriate model. More importantly, the results clearly illustrate the consequences of ignoring heterogeneity. Specifically, we find that ignoring heterogeneity can lead to sign reversals of the factor covariances, inflation of factor variances, and underappreciation of uncertainty in parameter estimates. The results from the emotion study show that subjects vary both in means and covariances. Thus traditional psychometric methods cannot fully capture the heterogeneity in our data.
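The sign-reversal and variance-inflation consequences have a simple mechanical source: pooling over classes adds the between-class covariance of the means to the average within-class covariance. A toy Python illustration of that arithmetic (not from the paper; the two-class setup and all numbers are illustrative):

```python
import random

def cov(a, b):
    # Sample covariance (divides by n; fine for large n).
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def within_vs_pooled(n=20000, seed=1):
    # Two latent classes share a positive within-class covariance (+0.5,
    # via a common factor u), but their class means are offset in opposite
    # directions. Pooling the classes -- i.e., ignoring the heterogeneity --
    # flips the sign of the covariance and inflates the variances.
    rng = random.Random(seed)
    xs, ys, within = [], [], []
    for mu_x, mu_y in [(-2.0, 2.0), (2.0, -2.0)]:
        gx, gy = [], []
        for _ in range(n):
            u = rng.gauss(0.0, 1.0)          # shared latent factor
            gx.append(mu_x + u)
            gy.append(mu_y + 0.5 * u + rng.gauss(0.0, 1.0))
        within.append(cov(gx, gy))
        xs += gx
        ys += gy
    return within, cov(xs, ys)
```

With these numbers each within-class covariance is about +0.5 while the pooled covariance is strongly negative (roughly 0.5 minus the between-means covariance of 4), so a model that ignores class membership gets the sign wrong.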
16.
Is the “hot-hands” phenomenon a misperception of random events?
T. Gilovich, R. Vallone, and A. Tversky (1985) asked whether the so-called hot-hands phenomenon – a temporary elevation of the probability of successful shots – actually exists in basketball. They concluded that hot-hands are misperceived random events. This paper re-examines the truth of their conclusion. The present study's main concern was the sensitivity of the statistical tests used in Gilovich et al.'s research. Simulated records of shots over a season were used. These represented many different situations and players, but they always contained at least one hot-hand period. The issue was whether Gilovich et al.'s tests were sensitive enough to detect the hot-hands embedded in the records. The study found that this sensitivity depends on the frequency of hot-hand periods, the total number of shots in all hot-hand periods, the number of shots in each hot-hand period, and the size of the increase in the probability of successful shots in hot-hand periods. However, when the values of those variables were set realistically, on average the tests could detect only about 12% of the hot-hands phenomena.
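The low-power point can be reproduced in miniature with a Wald-Wolfowitz runs test, one of the standard streakiness tests in this literature. A hedged Python sketch (not the article's simulation; the season length, hot-hand size, and 5% one-sided criterion are illustrative assumptions):

```python
import math
import random

def runs_test_z(shots):
    # Wald-Wolfowitz runs test on a 0/1 shot record; a strongly
    # negative z means fewer runs than chance, i.e. streakiness.
    n1 = sum(shots)
    n0 = len(shots) - n1
    runs = 1 + sum(1 for a, b in zip(shots, shots[1:]) if a != b)
    n = n1 + n0
    mu = 1 + 2.0 * n1 * n0 / n
    var = 2.0 * n1 * n0 * (2.0 * n1 * n0 - n) / (n * n * (n - 1))
    return (runs - mu) / math.sqrt(var)

def detection_rate(n_seasons=400, n_shots=500, base_p=0.5,
                   hot_p=0.7, hot_len=30, seed=2):
    # Hide one hot-hand stretch (hot_len shots at hot_p instead of base_p)
    # in each simulated season, and count how often a one-sided runs test
    # flags the season as streaky at the 5% level (z < -1.645).
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_seasons):
        start = rng.randrange(n_shots - hot_len)
        shots = [1 if rng.random() < (hot_p if start <= i < start + hot_len
                                      else base_p) else 0
                 for i in range(n_shots)]
        hits += runs_test_z(shots) < -1.645
    return hits / n_seasons
```

With a realistic-sized hot hand (30 shots at .70 inside a 500-shot season) the detection rate stays low, while an implausibly large hot hand (say 150 shots at .95) is flagged almost every season, which is the sensitivity pattern the abstract describes.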
17.
Examinee-selected item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set, always yields incomplete data (i.e., when only the selected items are answered, data are missing for the others) that are likely non-ignorable in likelihood inference. Standard item response theory (IRT) models become infeasible when ESI data are missing not at random (MNAR). To solve this problem, the authors propose a two-dimensional IRT model that posits one unidimensional IRT model for observed data and another for nominal selection patterns. The two latent variables are assumed to follow a bivariate normal distribution. In this study, the mirt freeware package was adopted to estimate parameters. The authors conduct an experiment to demonstrate that ESI data are often non-ignorable and to determine how to apply the new model to the data collected. Two follow-up simulation studies are conducted to assess the parameter recovery of the new model and the consequences for parameter estimation of ignoring MNAR data. The results of the two simulation studies indicate good parameter recovery of the new model and poor parameter recovery when non-ignorable missing data were mistakenly treated as ignorable.
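Non-ignorability here means the selection can depend on the very responses that go unobserved. A toy Python generator of MNAR ESI data (not the authors' design; the Rasch form, the "pick an item you know" rule, and all numbers are illustrative assumptions):

```python
import math
import random

def rasch_p(theta, b):
    # Rasch model probability of a correct response.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate_esi(n=20000, b1=0.0, b2=0.0, pick_known=0.8, seed=3):
    # Each examinee must answer exactly 1 of 2 items. With probability
    # pick_known they choose an item they would answer correctly (if any),
    # so selection depends on the unobserved responses themselves -- the
    # defining feature of an MNAR mechanism.
    rng = random.Random(seed)
    observed_correct = 0
    complete_correct = 0
    for _ in range(n):
        theta = rng.gauss(0.0, 1.0)
        y = [rng.random() < rasch_p(theta, b) for b in (b1, b2)]
        complete_correct += sum(y)
        if any(y) and rng.random() < pick_known:
            pick = y.index(True)       # picks an item they would get right
        else:
            pick = rng.randrange(2)
        observed_correct += y[pick]
    return observed_correct / n, complete_correct / (2 * n)
```

The observed proportion correct on selected items comes out far above the complete-data proportion, so treating the selected responses as an ignorable sample would badly distort item and ability estimates, which is the failure the simulation studies document.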
18.
The conjunction fallacy occurs when people judge a conjunctive statement B-and-A to be more probable than a constituent B, in contrast to the law of probability that P(B ∧ A) cannot exceed P(B) or P(A). Researchers see this fallacy as demonstrating that people do not follow probability theory when judging conjunctive probability. This paper shows that the conjunction fallacy can be explained by the standard probability theory equation for conjunction if we assume random variation in the constituent probabilities used in that equation. The mathematical structure of this equation is such that random variation will be most likely to produce the fallacy when one constituent has high probability and the other low, when there is positive conditional support between the constituents, when there are two rather than three constituents, and when people rank probabilities rather than give numerical estimates. The conjunction fallacy has been found to occur most frequently in exactly these situations. Copyright © 2008 John Wiley & Sons, Ltd.
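The "high constituent, low constituent" prediction is easy to check by simulation: feed noisy reads of the constituents into the exact conjunction identity P(B ∧ A) = P(B)·P(A|B) and count how often the resulting estimate exceeds an independent noisy read of P(B). A Python sketch (not the paper's model; the Gaussian noise level and probability values are illustrative assumptions):

```python
import random

def clamp(p):
    # Keep a noisy probability estimate inside (0, 1).
    return max(0.01, min(0.99, p))

def fallacy_rate(p_b, p_a_given_b, noise=0.15, trials=20000, seed=4):
    # Noisy reads of each constituent feed the conjunction identity
    # P(B and A) = P(B) * P(A|B); a "fallacy" is counted whenever the
    # noisy conjunction estimate exceeds an independent noisy read of P(B).
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        b_in_conj = clamp(p_b + rng.gauss(0.0, noise))
        a_given_b = clamp(p_a_given_b + rng.gauss(0.0, noise))
        b_alone = clamp(p_b + rng.gauss(0.0, noise))
        count += b_in_conj * a_given_b > b_alone
    return count / trials
```

With a low P(B) and a high P(A|B) (e.g. 0.2 and 0.9) the true conjunction sits just below P(B), so noise flips the comparison often; with two middling constituents (0.5 and 0.5) the gap is large and the fallacy rate drops, matching the pattern the abstract describes.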
19.
Ke-Hai Yuan, Psychometrika, 2009, 74(2): 233-256
When data are not missing at random (NMAR), the maximum likelihood (ML) procedure will not generate consistent parameter estimates unless the missing-data mechanism is correctly modeled. Understanding the NMAR mechanism in a data set would allow one to better use the ML methodology. A survey or questionnaire may contain many items; certain items may be responsible for NMAR values in other items. The paper develops statistical procedures to identify the responsible items. By comparing ML estimates (MLEs), statistics are developed to test whether the MLEs change when items are excluded. The items that cause a significant change in the MLEs are responsible for the NMAR mechanism. A normal distribution is used for obtaining the MLEs; a sandwich-type covariance matrix is used to account for distribution violations. The class of nonnormal distributions within which the procedure is valid is provided. Both saturated and structural models are considered. Effect sizes are also defined and studied. The results indicate that more missing data in a sample does not necessarily imply more significant test statistics, due to smaller effect sizes. Knowing the true population means and covariances, or the parameter values in structural equation models, may not make things easier either. The research was supported by NSF grant DMS04-37167 and the James McKeen Cattell Fund.
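The item-exclusion idea can be previewed with a much cruder diagnostic than the paper's MLE-difference tests: simulate one item driving the missingness in another and watch a complete-case estimate shift away from the full-sample value. A toy Python sketch (not the paper's procedure; the two-item setup and all numbers are illustrative assumptions):

```python
import random

def complete_case_shift(n=20000, seed=5):
    # x1 and x2 are correlated item scores; missingness in x2 is driven
    # by the value of x1, so x1 is the "responsible" item. Comparing the
    # complete-case mean of x1 with its full-sample mean exposes the kind
    # of estimate shift the paper's MLE-comparison statistics formalize.
    rng = random.Random(seed)
    x1s, missing = [], []
    for _ in range(n):
        x1 = rng.gauss(0.0, 1.0)
        x2 = 0.5 * x1 + rng.gauss(0.0, 1.0)   # x2's values would be deleted
        x1s.append(x1)
        missing.append(rng.random() < (0.8 if x1 > 0 else 0.1))
    full_mean = sum(x1s) / n
    kept = [x for x, m in zip(x1s, missing) if not m]
    return full_mean, sum(kept) / len(kept)
```

Because examinees with high x1 mostly lose their x2 values, the complete cases over-represent low x1, and the complete-case mean of x1 is pulled well below the full-sample mean of roughly zero.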
20.
A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution then requires an integral whose dimension equals the number of nonlinear terms. For nonlinear functions that have linear coefficients, the improvement in computational speed and accuracy using the new algorithm can be dramatic. An illustration of the method with repeated measures data from a learning experiment is presented.
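The dimension-reduction point can be illustrated numerically: for an exponential learning curve with linear intercept and amplitude, only the rate parameter is nonlinear, so the marginal likelihood is a one-dimensional integral however many linear terms there are. A crude fixed-grid quadrature (standing in for the paper's algorithm; the model form and all values are illustrative assumptions):

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x.
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi)))

def marginal_loglik(y, t, a, b, mu_c, sd_c, sd_e, grid=201, span=5.0):
    # Exponential learning curve y = a + b*exp(-c*t) + error: a and b are
    # linear coefficients, and only the rate c is a nonlinear random
    # effect, c ~ N(mu_c, sd_c). The marginal likelihood of one subject's
    # repeated measures therefore needs just a one-dimensional integral
    # over c, approximated here on a fixed grid of width span*sd_c.
    lo = mu_c - span * sd_c
    step = 2.0 * span * sd_c / (grid - 1)
    total = 0.0
    for k in range(grid):
        c = lo + k * step
        lik = normal_pdf(c, mu_c, sd_c)          # random-effect density
        for yi, ti in zip(y, t):
            lik *= normal_pdf(yi, a + b * math.exp(-c * ti), sd_e)
        total += lik * step
    return math.log(total)
```

Evaluated on data generated from the curve itself, the marginal log-likelihood is higher at the generating rate distribution than at a badly misspecified one, as an estimation routine built on this integral would require.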
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号