21.
David G. Schlundt & Clyde P. Donahoe Jr., Journal of Psychopathology and Behavioral Assessment, 1983, 5(4): 309–316
A problem arises in analyzing the existence of interdependence between the behavioral sequences of two individuals: tests involving a statistic such as chi-square assume independent observations within each behavioral sequence, a condition which may not exist in actual practice. Using Monte Carlo simulations of binomial data sequences, we found that the use of a chi-square test frequently results in unacceptable Type I error rates when the data sequences are autocorrelated. We compared these results to those from two other methods designed specifically for testing for intersequence independence in the presence of intrasequence autocorrelation. The first method directly tests the intersequence correlation using an approximation of the variance of the intersequence correlation estimated from the sample autocorrelations. The second method uses tables of critical values of the intersequence correlation computed by Nakamura et al. (J. Am. Stat. Assoc., 1976, 71, 214–222). Although these methods were originally designed for normally distributed data, we found that both methods produced much better results than the uncorrected chi-square test when applied to binomial autocorrelated sequences. The superior method appears to be the variance approximation method, which resulted in Type I error rates that were generally less than or equal to 5% when the level of significance was set at .05.
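The inflation the abstract describes is easy to reproduce. Below is a minimal sketch, not the authors' simulation code: the persistence scheme for generating autocorrelated binary sequences, the sample sizes, and all function names are my own assumptions for illustration. Two independent autocorrelated sequences are paired observation by observation, and a naive chi-square test of the resulting 2x2 table rejects far more often than the nominal 5%.

```python
import random

def markov_binary(n, rho, rng, p=0.5):
    """Binary sequence with lag-1 autocorrelation rho: with probability rho
    repeat the previous value, otherwise draw a fresh Bernoulli(p)."""
    x = [1 if rng.random() < p else 0]
    for _ in range(n - 1):
        if rng.random() < rho:
            x.append(x[-1])
        else:
            x.append(1 if rng.random() < p else 0)
    return x

def chi2_stat(a, b):
    """Pearson chi-square statistic (df = 1) for the 2x2 contingency table
    formed by pairing sequences a and b observation by observation."""
    n = len(a)
    tbl = [[0, 0], [0, 0]]
    for ai, bi in zip(a, b):
        tbl[ai][bi] += 1
    rows = [sum(tbl[0]), sum(tbl[1])]
    cols = [tbl[0][0] + tbl[1][0], tbl[0][1] + tbl[1][1]]
    stat = 0.0
    for i in (0, 1):
        for j in (0, 1):
            expected = rows[i] * cols[j] / n
            if expected > 0:
                stat += (tbl[i][j] - expected) ** 2 / expected
    return stat

CRIT_05 = 3.841  # 95th percentile of chi-square with 1 df

def rejection_rate(rho, reps=500, n=200, seed=1):
    """Estimated Type I error rate: both sequences are generated
    independently, so every rejection is a false positive."""
    rng = random.Random(seed)
    rejects = 0
    for _ in range(reps):
        a = markov_binary(n, rho, rng)
        b = markov_binary(n, rho, rng)  # independent of a
        if chi2_stat(a, b) > CRIT_05:
            rejects += 1
    return rejects / reps
```

With `rho = 0` the rejection rate stays near the nominal .05; with strong autocorrelation (e.g. `rho = 0.8`) it is several times larger, matching the pattern the article reports.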
22.
Dorothy T. Thayer, Psychometrika, 1983, 48(2): 293–297
Consider an old test X consisting of s sections and two new tests Y and Z, similar to X, consisting of p and q sections respectively. All subjects are given test X plus two variable sections from either test Y or Z. Different pairings of variable sections are given to each subsample of subjects. We present a method of estimating the covariance matrix of the combined test (X_1, ..., X_s, Y_1, ..., Y_p, Z_1, ..., Z_q) and describe an application of these estimation techniques to linear, observed-score, test equating.

The author is indebted to Paul W. Holland and Donald B. Rubin for their encouragement and many helpful comments and suggestions that contributed significantly to the development of this paper. This research was supported by the Program Statistics Research Project of the ETS Research Statistics Group.
23.
We assume that a judge's task is to categorize each of N subjects into one of r known classes. The design of primary interest is employed if the judge is presented with s groups, each containing r subjects, such that each group of size r consists of exactly one subject of each of the r types. The probability distribution for the total number of correct choices is developed and used to test the null hypothesis that the judge is guessing, against the alternative that he or she is operating at a better-than-chance level. The power of the procedure is shown to be superior to that of two other procedures which appear in the literature.

The authors are grateful for the suggestions of the referees and for computer funding provided by the Northeast Regional Data Center at the University of Florida.
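Under one natural reading of the guessing model (the judge assigns the r subjects in a group to the r classes one-to-one, uniformly at random), the number of correct choices in a single group is the number of fixed points of a random permutation, i.e. it follows the rencontres (matching) distribution. The sketch below works under that assumption; it is an illustration of the chance model, not the paper's derivation.

```python
from math import comb, factorial

def derangements(n):
    """Number of permutations of n items with no fixed points:
    D_0 = 1, D_1 = 0, D_n = (n - 1) * (D_{n-1} + D_{n-2})."""
    d = [1, 0]
    for i in range(2, n + 1):
        d.append((i - 1) * (d[i - 1] + d[i - 2]))
    return d[n]

def p_correct(r, k):
    """P(exactly k correct) for a judge guessing a random one-to-one
    assignment of r subjects to r classes (rencontres distribution)."""
    return comb(r, k) * derangements(r - k) / factorial(r)
```

For r = 4 this gives, e.g., P(0 correct) = 9/24 and P(exactly 3 correct) = 0 (if three are right, the fourth must be too); the expected number of correct choices per group is 1 regardless of r, so the chance baseline over s groups is simply s.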
24.
The vast majority of existing multidimensional scaling (MDS) procedures devised for the analysis of paired comparison preference/choice judgments are typically based on either scalar product (i.e., vector) or unfolding (i.e., ideal-point) models. Such methods tend to ignore many of the essential components of microeconomic theory, including convex indifference curves, constrained utility maximization, demand functions, and so on. This paper presents a new stochastic MDS procedure called MICROSCALE that attempts to operationalize many of these traditional microeconomic concepts. First, we briefly review several existing MDS models that operate on paired comparisons data, noting the particular nature of the utility functions implied by each class of models. These utility assumptions are then directly contrasted with those of microeconomic theory. The new maximum likelihood based procedure, MICROSCALE, is presented, together with the technical details of the estimation procedure. The results of a Monte Carlo analysis, in which a number of model, data, and error factors are experimentally manipulated to investigate the performance of the algorithm, are provided. Finally, an illustration in consumer psychology concerning a convenience sample of thirty consumers providing paired comparisons judgments for some fourteen brands of over-the-counter analgesics is discussed.
25.
Restricted multidimensional scaling models [Bentler & Weeks, 1978], allowing constraints on parameters, are extended to the case of asymmetric data. Separate functions are used to model the symmetric and antisymmetric parts of the data. The approach is also extended to the case in which data are presumed to be linearly related to squared distances. Examples of several models are provided, using journal citation data. Possible extensions of the models are considered.

This research was supported in part by USPHS Grant 0A01070, P. M. Bentler, principal investigator, and NIMH Grant MH-24819, E. J. Anthony and J. Worland, principal investigators. The authors wish to thank E. W. Holman and several anonymous reviewers for their valuable suggestions concerning this research.
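The decomposition behind modeling the symmetric and antisymmetric parts separately is the standard split of any square matrix D into D = S + A, with S = (D + Dᵀ)/2 symmetric and A = (D − Dᵀ)/2 antisymmetric. A minimal sketch of that split (plain nested lists, my naming; the MDS models fitted to each part are of course the paper's subject, not shown here):

```python
def sym_antisym_split(D):
    """Split a square matrix D into its symmetric part S = (D + D^T)/2
    and antisymmetric part A = (D - D^T)/2, so that D = S + A."""
    n = len(D)
    S = [[(D[i][j] + D[j][i]) / 2 for j in range(n)] for i in range(n)]
    A = [[(D[i][j] - D[j][i]) / 2 for j in range(n)] for i in range(n)]
    return S, A

# e.g. an asymmetric 2x2 "citation" count matrix
S, A = sym_antisym_split([[0, 3], [1, 0]])
```

For this toy input, `S` is `[[0.0, 2.0], [2.0, 0.0]]` (average flow in both directions) and `A` is `[[0.0, 1.0], [-1.0, 0.0]]` (net directional imbalance).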
26.
For designs in which the same variables are measured on different occasions, two procedures for canonical analysis with stationary compositing weights are developed. The first, SUMCOV, maximizes the sum of the covariances of the canonical variates subject to norming constraints. The second, COLLIN, maximizes the largest root of the covariances of the canonical variates subject to norming constraints. A characterization theorem establishes a model-building approach. Both methods are extended to allow for cohort sequential designs. Finally, a numerical illustration utilizing Nesselroade and Baltes data is presented.

The authors wish to thank John Nesselroade for permitting us to use the data whose analysis we present.
27.
Percentage agreement measures of interobserver agreement or "reliability" have traditionally been used to summarize observer agreement from studies using interval recording, time-sampling, and trial-scoring data collection procedures. Recent articles disagree on whether to continue using these percentage agreement measures, on which ones to use, and on what to do about chance agreements if their use is continued. Much of the disagreement derives from the need to be reasonably certain we do not accept, as evidence of true interobserver agreement, agreement levels that are substantially probable as a result of chance observer agreement. The various percentage agreement measures are shown to be adequate to this task, but easier ways are discussed. Tables are given to permit checking whether obtained disagreements are unlikely to be due to chance. Particularly important is the discovery of a simple rule that, when met, makes the tables unnecessary: if reliability checks using 50 or more observation occasions produce 10% or fewer disagreements, for behavior rates from 10% through 90%, the agreement achieved is quite improbably the result of chance agreement.
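The chance-agreement argument can be formalized with a simple binomial model. This is one way to reconstruct the logic, not the paper's actual tables: assume two independent observers each score the behavior as present at rate p, so on any occasion they agree by chance with probability p² + (1 − p)², and the count of chance agreements over n occasions is binomial.

```python
from math import comb

def chance_agreement_prob(p):
    """Probability that two independent observers, each scoring the
    behavior at rate p, agree by chance on a single occasion."""
    return p * p + (1 - p) * (1 - p)

def p_at_least_k_agreements(n, k, p_agree):
    """Binomial upper-tail probability: chance of k or more agreements
    in n occasions when each agrees independently with prob p_agree."""
    return sum(comb(n, i) * p_agree**i * (1 - p_agree)**(n - i)
               for i in range(k, n + 1))

# the article's rule: n = 50 occasions, at most 10% disagreements,
# i.e. at least 45 agreements
tail_at_half = p_at_least_k_agreements(50, 45, chance_agreement_prob(0.5))
```

At a behavior rate of 50%, chance agreement per occasion is only 0.5, so 45-of-50 agreements by chance is astronomically unlikely; chance agreement is highest (and the rule least conservative) at the extreme behavior rates, which is presumably why the abstract bounds the rule to rates between 10% and 90%.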
28.
Synthetic data are used to examine how well axiomatic and numerical conjoint measurement methods, individually and comparatively, recover simple polynomial generators in three dimensions. The study illustrates extensions of numerical conjoint measurement (NCM) to identify and model distributive and dual-distributive, in addition to the usual additive, data structures. It was found that while minimum STRESS was the criterion of fit, another statistic, predictive capability, provided a better diagnosis of the known generating model. That NCM methods were better able to identify generating models conflicts with Krantz and Tversky's assertion that, in general, direct axiom tests provide a more powerful diagnostic test between alternative composition rules than does evaluation of numerical correspondence. For all methods, dual-distributive models are the most difficult to recover, while, consistent with past studies, the additive model is the most robust of the fitted models.

Douglas Emery is now at the Krannert Graduate School of Management, Purdue University, West Lafayette, IN, on leave from the University of Calgary.
29.
Anders Christoffersson, Psychometrika, 1977, 42(3): 433–438
A two-step weighted least squares estimator for multiple factor analysis of dichotomized variables is discussed. The estimator is based on the first- and second-order joint probabilities. Asymptotic standard errors and a model test are obtained by applying the jackknife procedure.
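The first- and second-order joint probabilities the estimator is based on are simply the sample proportions of positive responses to each item and to each pair of items. A minimal sketch of computing them from a binary data matrix (my naming; the weighted least squares fitting step itself is beyond a few lines):

```python
def joint_proportions(X):
    """First-order proportions p_i = P(item i positive) and second-order
    joint proportions p_ij = P(items i and j both positive), estimated
    from a binary data matrix X (rows = subjects, columns = items)."""
    n, m = len(X), len(X[0])
    p1 = [sum(row[i] for row in X) / n for i in range(m)]
    p2 = [[sum(row[i] * row[j] for row in X) / n for j in range(m)]
          for i in range(m)]
    return p1, p2

# four subjects, two dichotomized items
p1, p2 = joint_proportions([[1, 1], [1, 0], [0, 0], [0, 1]])
```

Note that the diagonal of `p2` reproduces `p1` (an item is "jointly positive with itself" exactly when it is positive), a useful sanity check on any implementation.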