171.
In Experiment 1 with rats, a left lever press led to a 5-s delay and then a possible reinforcer. A right lever press led to an adjusting delay and then a certain reinforcer. This delay was adjusted over trials to estimate an indifference point, or a delay at which the two alternatives were chosen about equally often. Indifference points increased as the probability of reinforcement for the left lever decreased. In some conditions with a 20% chance of food, a light above the left lever was lit during the 5-s delay on all trials, but in other conditions, the light was only lit on those trials that ended with food. Unlike previous results with pigeons, the presence or absence of the delay light on no-food trials had no effect on the rats' indifference points. In other conditions, the rats showed less preference for the 20% alternative when the time between trials was longer. In Experiment 2 with rats, fixed-interval schedules were used instead of simple delays, and the presence or absence of the fixed-interval requirement on no-food trials had no effect on the indifference points. In Experiment 3 with rats and Experiment 4 with pigeons, the animals chose between a fixed-ratio 8 schedule that led to food on 33% of the trials and an adjusting-ratio schedule with food on 100% of the trials. Surprisingly, the rats showed less preference for the 33% alternative in conditions in which the ratio requirement was omitted on no-food trials. For the pigeons, the presence or absence of the ratio requirement on no-food trials had little effect. The results suggest that there may be differences between rats and pigeons in how they respond in choice situations involving delayed and probabilistic reinforcers.
172.
In this paper, we apply sequential one-sided confidence interval estimation procedures with β-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery testing procedure based on a one-sided confidence interval with β-protection is more efficient in terms of test length than a testing procedure based on a two-sided/fixed-width confidence interval. Some simulation studies applying the one-sided confidence interval procedure and its extensions mentioned above to adaptive mastery testing are conducted. For comparison, we also conducted a numerical study of adaptive mastery testing based on Wald's sequential probability ratio test. The comparison of their performances is based on the correct classification probability, averages of test length, and the width of the "indifference regions." From these empirical results, we found that applying the one-sided confidence interval procedure to adaptive mastery testing is very promising.
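As a point of reference for the comparison above, Wald's sequential probability ratio test for mastery classification can be sketched in a few lines: item responses accumulate a log-likelihood ratio between a non-mastery and a mastery success probability until one of the error-rate boundaries is crossed. The success probabilities and error rates below are hypothetical illustration values, not ones taken from the study:

```python
import math

def sprt_mastery(responses, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's SPRT: classify an examinee as master (success prob p1)
    versus non-master (success prob p0) from a stream of 0/1 item scores.
    Returns (decision, number_of_items_used)."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept mastery
    lower = math.log(beta / (1 - alpha))   # cross below -> accept non-mastery
    llr = 0.0
    for n, x in enumerate(responses, start=1):
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "master", n
        if llr <= lower:
            return "non-master", n
    return "undecided", len(responses)
```

With these illustrative settings, a run of correct answers reaches the mastery boundary after nine items, which is the test-length economy the abstract's comparison is about.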
173.
Although the Bock–Aitkin likelihood-based estimation method for factor analysis of dichotomous item response data has important advantages over classical analysis of item tetrachoric correlations, a serious limitation of the method is its reliance on fixed-point Gauss-Hermite (G-H) quadrature in the solution of the likelihood equations and likelihood-ratio tests. When the number of latent dimensions is large, computational considerations require that the number of quadrature points per dimension be few. But with large numbers of items, the dispersion of the likelihood, given the response pattern, becomes so small that the likelihood cannot be accurately evaluated with the sparse fixed points in the latent space. In this paper, we demonstrate that substantial improvement in accuracy can be obtained by adapting the quadrature points to the location and dispersion of the likelihood surfaces corresponding to each distinct pattern in the data. In particular, we show that adaptive G-H quadrature, combined with mean and covariance adjustments at each iteration of an EM algorithm, produces an accurate fast-converging solution with as few as two points per dimension. Evaluations of this method with simulated data are shown to yield accurate recovery of the generating factor loadings for models of up to eight dimensions. Unlike an earlier application of adaptive Gibbs sampling to this problem by Meng and Schilling, the simulations also confirm the validity of the present method in calculating likelihood-ratio chi-square statistics for determining the number of factors required in the model. Finally, we apply the method to a sample of real data from a test of teacher qualifications.
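The core idea of adapting the quadrature to each response pattern can be illustrated in one dimension: recenter the Gauss-Hermite nodes at the likelihood's location and rescale them by its dispersion before evaluating the integral. This is a minimal sketch of the principle only, not the Bock–Aitkin EM algorithm itself, and the example integrand and values are hypothetical:

```python
import numpy as np

def gh_integral(g, mu, sigma, n_points=2):
    """Approximate the integral of g over the real line by Gauss-Hermite
    quadrature with nodes recentered at mu and rescaled by sigma.
    Matching mu/sigma to g's location and dispersion gives the 'adaptive'
    placement; mu=0, sigma=1 gives the fixed-point placement."""
    x, w = np.polynomial.hermite.hermgauss(n_points)  # rule for integral of e^{-x^2} f(x)
    theta = mu + np.sqrt(2.0) * sigma * x             # change of variables
    return np.sqrt(2.0) * sigma * np.sum(w * np.exp(x**2) * g(theta))
```

With an integrand concentrated around θ = 3 with spread 0.2, two adapted points recover the integral almost exactly, while two fixed points centered at zero miss the mass entirely, which is the failure mode of sparse fixed grids the abstract describes.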
174.
The authors introduce subset conjunction as a classification rule by which an acceptable alternative must satisfy some minimum number of criteria. The rule subsumes conjunctive and disjunctive decision strategies as special cases. Subset conjunction can be represented in a binary-response model, for example, in a logistic regression, using only main effects or only interaction effects. This results in a confounding of the main and interaction effects when there is little or no response error. With greater response error, a logistic regression, even if it gives a good fit to data, can produce parameter estimates that do not reflect the underlying decision process. The authors propose a model in which the binary classification of alternatives into acceptable/unacceptable categories is based on a probabilistic implementation of a subset-conjunctive process. The satisfaction of decision criteria biases the odds toward one outcome or the other. The authors then describe a two-stage choice model in which a (possibly large) set of alternatives is first reduced using a subset-conjunctive rule, after which an alternative is selected from this reduced set of items. They describe methods for estimating the unobserved consideration probabilities from classification and choice data, and illustrate the use of the models for cancer diagnosis and consumer choice. They report the results of simulations investigating estimation accuracy, incidence of local optima, and model fit. The authors thank the Editor, the Associate Editor, and three anonymous reviewers for their constructive suggestions, and also thank Asim Ansari and Raghuram Iyengar for their helpful comments. They also thank Sawtooth Software, McKinsey and Company, and Intelliquest for providing the PC choice data, and the University of Wisconsin for making the breast-cancer data available at the machine learning archives.
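The deterministic core of the subset-conjunctive rule is simple to state in code: an alternative is acceptable if it satisfies at least k of its criteria, where k equal to the number of criteria recovers the conjunctive rule and k = 1 the disjunctive one. A minimal sketch (the paper's probabilistic implementation adds response error on top of this):

```python
def subset_conjunctive(criteria_met, k):
    """Accept an alternative iff it satisfies at least k of its criteria.
    k == len(criteria_met) recovers the conjunctive rule (all must hold);
    k == 1 recovers the disjunctive rule (any one suffices)."""
    return sum(1 for c in criteria_met if c) >= k
```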
175.
176.
The present study examines controlled and automatic uses of memory in clinically depressed patients by applying the Process Dissociation Procedure developed by Jacoby (1991) to a stem completion memory task with short and long retention intervals. The results show that the contribution of controlled processes is lower in depressed patients than in controls, especially for the longest retention interval, whereas the contribution of automatic processes is equivalent in both groups and unaffected by the length of the retention interval. These findings are discussed in a cognitive control framework.
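Jacoby's process-dissociation estimates follow directly from performance in the inclusion and exclusion conditions: Inclusion = C + A(1 − C) and Exclusion = A(1 − C), so C = Inclusion − Exclusion and A = Exclusion / (1 − C), where C is the controlled and A the automatic contribution. A minimal sketch with hypothetical proportions:

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Jacoby's (1991) process-dissociation estimates.
    Inclusion = C + A(1 - C); Exclusion = A(1 - C)
    => C = Inclusion - Exclusion; A = Exclusion / (1 - C)."""
    C = p_inclusion - p_exclusion
    A = p_exclusion / (1.0 - C) if C < 1.0 else float("nan")
    return C, A
```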
177.
Liechty, Pieters & Wedel (2003) developed a hidden Markov Model (HMM) to identify the states of an attentional process in an advertisement viewing task. This work is significant because it demonstrates the benefits of stochastic modeling and Bayesian estimation in making inferences about cognitive processes based on eye movement data. One limitation of the proposed approach is that attention is conceptualized as an autonomous random process that is affected neither by the overall layout of the stimulus nor by the visual information perceived during the current fixation. An alternative model based on the input-output hidden Markov model (IOHMM; Bengio, 1999) is suggested as an extension of the HMM. The need for further studies that validate the HMM classification results is also discussed.
178.
Three approaches to the analysis of main and interaction effect hypotheses in nonorthogonal designs were compared in a 2×2 design for data that were neither normal in form nor equal in variance. The approaches involved either least squares or robust estimators of central tendency and variability and/or a test statistic that either pools or does not pool sources of variance. Specifically, we compared the ANOVA F test that used trimmed means and Winsorized variances, the Welch-James test with the usual least squares estimators for central tendency and variability, and the Welch-James test using trimmed means and Winsorized variances. As hypothesized, we found that the latter approach provided excellent Type I error control, whereas the former two did not. Financial support for this research was provided by grants to the first author from the National Sciences and Engineering Research Council of Canada (#OGP0015855) and the Social Sciences and Humanities Research Council (#410-95-0006). The authors would like to express their appreciation to the Associate Editor as well as the reviewers who provided valuable comments on an earlier version of this paper.
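The robust building blocks used here, trimmed means and Winsorized variances, combine into a Welch-type statistic. A minimal sketch for the two-sample case (Yuen's test, the one-sample-factor analogue of the Welch-James approach applied in the 2×2 design) with 20% trimming:

```python
import numpy as np
from math import sqrt

def trimmed_mean(x, gamma=0.2):
    """Mean after discarding the lowest and highest gamma proportions."""
    x = np.sort(np.asarray(x, dtype=float))
    g = int(gamma * len(x))
    return x[g:len(x) - g].mean()

def winsorized_variance(x, gamma=0.2):
    """Sample variance after replacing each tail with its nearest retained value."""
    x = np.sort(np.asarray(x, dtype=float))
    g = int(gamma * len(x))
    w = x.copy()
    w[:g] = x[g]
    w[len(x) - g:] = x[len(x) - g - 1]
    return w.var(ddof=1)

def yuen_t(x, y, gamma=0.2):
    """Yuen's two-sample t statistic on trimmed means with Welch-type df:
    variances are not pooled, matching the non-pooling strategy above."""
    n1, n2 = len(x), len(y)
    g1, g2 = int(gamma * n1), int(gamma * n2)
    h1, h2 = n1 - 2 * g1, n2 - 2 * g2          # effective (trimmed) sample sizes
    d1 = (n1 - 1) * winsorized_variance(x, gamma) / (h1 * (h1 - 1))
    d2 = (n2 - 1) * winsorized_variance(y, gamma) / (h2 * (h2 - 1))
    t = (trimmed_mean(x, gamma) - trimmed_mean(y, gamma)) / sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t, df
```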
179.
Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item and ability parameters. Simulated data sets were analyzed via two joint and two marginal Bayesian estimation procedures. The marginal Bayesian estimation procedures yielded consistently smaller root mean square differences than the joint Bayesian estimation procedures for item and ability estimates. As the sample size and test length increased, the four Bayes procedures yielded essentially the same result. The authors wish to thank the Editor and anonymous reviewers for their insightful comments and suggestions.
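The item response function underlying all four procedures is the two-parameter logistic model, P(correct | θ) = 1 / (1 + exp(−a(θ − b))). A minimal sketch of the model and the response-pattern log-likelihood it induces (no scaling constant D = 1.7 is applied, and the item parameters shown in use are hypothetical, not estimated):

```python
import math

def p_correct_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability that an examinee with
    ability theta answers correctly an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood_2pl(theta, items, responses):
    """Log-likelihood of a 0/1 response pattern given ability theta;
    items is a list of (a, b) pairs, one per response."""
    ll = 0.0
    for (a, b), u in zip(items, responses):
        p = p_correct_2pl(theta, a, b)
        ll += math.log(p) if u else math.log(1.0 - p)
    return ll
```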
180.
A size estimation (SE) paradigm and the Mueller-Lyer (ML) illusion were used to examine perceptual disturbances in schizophrenics. Thirty-five reliably diagnosed (DSM III-R) schizophrenics were compared to 20 subjects with no history of psychiatric illness. Perceptual distortions found in previous studies of schizophrenics were only partly confirmed in the present results. More overestimators were found among the schizophrenics than among the normals on the SE task. The schizophrenics, particularly the chronic patients, also proved to be more prone to the Mueller-Lyer illusion. One reason the very clear differences between schizophrenics and normals found in previous examinations were not replicated here may be that this is the first study of its kind to use a reliable diagnostic instrument.