Search results: 776 in total.
21.
Results of 1,579 observations of cars entering or exiting campus parking lots showed direct relationships between seat belt wearing and the intrusiveness of the engineering device designed to induce belt usage, and between device intrusiveness and system defeat. For example, all drivers with working interlocks or unlimited buzzer reminders were wearing a seat belt; but 62% of the systems with interlocks or unlimited buzzers had been defeated, and only 15.9% of the drivers in those cars were wearing a seat belt. The normative data indicated marked ineffectiveness of the negative reinforcement contingencies implied by current seat belt inducement systems, but suggested that unlimited buzzer systems would be the best option currently available if contingencies were developed to discourage the disconnection and circumvention of such systems. Positive reinforcement strategies that would be feasible for large-scale promotion of seat belt usage are discussed.
22.
Sik-Yum Lee, Psychometrika, 1981, 46(2): 153–160
Confirmatory factor analysis is considered from a Bayesian viewpoint, in which prior information on the parameters is incorporated in the analysis. An iterative algorithm is developed to obtain the Bayes estimates. A numerical example based on longitudinal data is presented. A simulation study is designed to compare the Bayesian approach with the maximum likelihood method. Computer facilities were provided by the Computer Services Center, The Chinese University of Hong Kong.
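Lee's iterative algorithm itself is not given in the abstract. As a rough sketch of the underlying idea only — combining the factor-model likelihood with a prior and locating the posterior mode — one might write something like the following, where the one-factor structure, the N(0, τ²I) prior on the loadings, and the use of a generic optimizer are all assumptions of this illustration, not the paper's method:

```python
# Minimal sketch of MAP (posterior-mode) estimation for a ONE-factor
# model, NOT Lee's (1981) algorithm: the likelihood of the sample
# covariance matrix is combined with an assumed N(0, tau2*I) prior on
# the loadings, and the mode is found with a generic optimizer.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
p, n = 6, 200
true_lam = np.array([0.8, 0.7, 0.6, 0.75, 0.65, 0.7])
psi = 1.0 - true_lam**2                       # unique variances (unit totals)
X = rng.standard_normal((n, 1)) @ true_lam[None, :] \
    + rng.standard_normal((n, p)) * np.sqrt(psi)
S = np.cov(X, rowvar=False)

tau2 = 1.0                                    # assumed prior variance

def neg_log_posterior(theta):
    lam, log_psi = theta[:p], theta[p:]
    sigma = np.outer(lam, lam) + np.diag(np.exp(log_psi))
    _, logdet = np.linalg.slogdet(sigma)
    loglik = -0.5 * n * (logdet + np.trace(np.linalg.solve(sigma, S)))
    logprior = -0.5 * np.sum(lam**2) / tau2   # lambda ~ N(0, tau2 * I)
    return -(loglik + logprior)

start = np.concatenate([np.full(p, 0.5), np.zeros(p)])
fit = minimize(neg_log_posterior, start, method="L-BFGS-B")
print("MAP loadings:", np.round(fit.x[:p], 3))
```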
23.
A problem arises in analyzing the existence of interdependence between the behavioral sequences of two individuals: tests involving a statistic such as chi-square assume independent observations within each behavioral sequence, a condition which may not exist in actual practice. Using Monte Carlo simulations of binomial data sequences, we found that the use of a chi-square test frequently results in unacceptable Type I error rates when the data sequences are autocorrelated. We compared these results to those from two other methods designed specifically for testing for intersequence independence in the presence of intrasequence autocorrelation. The first method directly tests the intersequence correlation using an approximation of the variance of the intersequence correlation estimated from the sample autocorrelations. The second method uses tables of critical values of the intersequence correlation computed by Nakamura et al. (J. Am. Stat. Assoc., 1976, 71, 214–222). Although these methods were originally designed for normally distributed data, we found that both methods produced much better results than the uncorrected chi-square test when applied to binomial autocorrelated sequences. The superior method appears to be the variance approximation method, which resulted in Type I error rates that were generally less than or equal to 5% when the level of significance was set at .05.
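The inflation the authors describe is easy to reproduce. In the sketch below, two independent binary sequences are generated from a two-state Markov chain (the Markov generator is an assumption of this illustration; the abstract says only "autocorrelated binomial sequences") and a chi-square test of independence is applied to the paired observations:

```python
# Two INDEPENDENT autocorrelated binary sequences per replication; count
# how often a chi-square test of the 2x2 cross-tabulation falsely rejects.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

def markov_chain(n, p=0.5, rho=0.6):
    """Binary chain with marginal P(1) = p and lag-1 autocorrelation rho."""
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p
    p11 = p + rho * (1 - p)          # P(1 -> 1)
    p01 = p * (1 - rho)              # P(0 -> 1)
    for t in range(1, n):
        x[t] = rng.random() < (p11 if x[t - 1] else p01)
    return x

n_sims, n_obs, rejections = 2000, 200, 0
for _ in range(n_sims):
    a, b = markov_chain(n_obs), markov_chain(n_obs)
    table = np.array([[np.sum((a == i) & (b == j)) for j in (0, 1)]
                      for i in (0, 1)])
    if chi2_contingency(table)[1] < 0.05:     # p-value of the test
        rejections += 1

# The sequences are independent, so this should be about .05, but the
# autocorrelation pushes it far higher, as the paper reports.
print("empirical Type I error rate:", rejections / n_sims)
```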
24.
Consider an old test X consisting of s sections and two new tests Y and Z, similar to X, consisting of p and q sections respectively. All subjects are given test X plus two variable sections from either test Y or Z. Different pairings of variable sections are given to each subsample of subjects. We present a method of estimating the covariance matrix of the combined test (X1, …, Xs, Y1, …, Yp, Z1, …, Zq) and describe an application of these estimation techniques to linear, observed-score test equating. The author is indebted to Paul W. Holland and Donald B. Rubin for their encouragement and many helpful comments and suggestions that contributed significantly to the development of this paper. This research was supported by the Program Statistics Research Project of the ETS Research Statistics Group.
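The paper's estimator cannot be reconstructed from the abstract; the sketch below only illustrates the data layout — every examinee takes all of X plus one pairing of two variable sections — and uses pairwise-complete covariance estimation as a simple stand-in for the paper's method. The section counts, pairing scheme, and one-common-factor generating model are all invented for the illustration:

```python
# Illustration of the DATA LAYOUT only: pairwise-complete covariance is
# a stand-in for the paper's estimator, not the method itself.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
s, p, q = 3, 2, 2
cols = ([f"X{i}" for i in range(1, s + 1)]
        + [f"Y{i}" for i in range(1, p + 1)]
        + [f"Z{i}" for i in range(1, q + 1)])

n = 1200
theta = rng.standard_normal((n, 1))            # one common true score
scores = theta * 0.8 + rng.standard_normal((n, len(cols))) * 0.6
full = pd.DataFrame(scores, columns=cols)

# Every pair of variable sections appears in exactly one subsample, so
# each off-diagonal covariance is estimable from at least one group.
variable = cols[s:]
pairings = [(variable[i], variable[j])
            for i in range(len(variable))
            for j in range(i + 1, len(variable))]
group = np.repeat(np.arange(len(pairings)), n // len(pairings))

observed = full.copy()
for g, pair in enumerate(pairings):
    hidden = [c for c in variable if c not in pair]
    observed.loc[group == g, hidden] = np.nan  # sections not administered

print(observed.cov().round(3))                 # pairwise-complete by default
```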
25.
We assume that a judge's task is to categorize each of N subjects into one of r known classes. The design of primary interest is employed if the judge is presented with s groups, each containing r subjects, such that each group of size r consists of exactly one subject of each of the r types. The probability distribution for the total number of correct choices is developed and used to test the null hypothesis that the judge is guessing in favor of the alternative that he or she is operating at a better than chance level. The power of the procedure is shown to be superior to two other procedures which appear in the literature. The authors are grateful for the suggestions of the referees and for computer funding provided by the Northeast Regional Data Center at the University of Florida.
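If "guessing" is taken to mean that the judge assigns a uniformly random permutation of the r class labels within each group (a natural reading of the design, though the paper's exact formulation may differ), the per-group distribution of correct choices is the classical fixed-point (rencontres) distribution, and the total over s independent groups is its s-fold convolution:

```python
# Null distribution of the judge's total correct choices under guessing,
# ASSUMING guessing = a uniform random permutation of the r labels per
# group. r, s, and the observed score below are hypothetical.
from math import comb, factorial
import numpy as np

def subfactorial(m):
    """Number of derangements of m items: D(m) = m*D(m-1) + (-1)**m."""
    d = 1
    for i in range(1, m + 1):
        d = d * i + (-1) ** i
    return d

def rencontres_pmf(r):
    """P(k fixed points) for a uniform random permutation of r items."""
    return np.array([comb(r, k) * subfactorial(r - k)
                     for k in range(r + 1)]) / factorial(r)

def total_correct_pmf(r, s):
    total = np.array([1.0])
    for _ in range(s):                 # s independent groups
        total = np.convolve(total, rencontres_pmf(r))
    return total

r, s = 4, 5
pmf = total_correct_pmf(r, s)          # support: 0 .. r*s correct choices
observed = 12
print("P(total >=", observed, "| guessing) =", pmf[observed:].sum())
```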
26.
The vast majority of existing multidimensional scaling (MDS) procedures devised for the analysis of paired comparison preference/choice judgments are based on either scalar product (i.e., vector) or unfolding (i.e., ideal-point) models. Such methods tend to ignore many of the essential components of microeconomic theory, including convex indifference curves, constrained utility maximization, and demand functions. This paper presents a new stochastic MDS procedure called MICROSCALE that attempts to operationalize many of these traditional microeconomic concepts. First, we briefly review several existing MDS models that operate on paired comparisons data, noting the particular nature of the utility functions implied by each class of models. These utility assumptions are then directly contrasted with those of microeconomic theory. The new maximum likelihood based procedure, MICROSCALE, is presented, together with the technical details of the estimation procedure. Results are provided from a Monte Carlo analysis investigating the performance of the algorithm as a number of model, data, and error factors are experimentally manipulated. Finally, an illustration in consumer psychology is discussed, concerning a convenience sample of thirty consumers providing paired comparisons judgments for fourteen brands of over-the-counter analgesics.
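MICROSCALE itself cannot be reconstructed from the abstract. For orientation only, the sketch below fits a Bradley–Terry model — a deliberately much simpler utility model than MICROSCALE's microeconomic one — to paired-comparison counts by maximum likelihood, since that ML-on-paired-comparisons machinery is the part such procedures share; the win counts are invented:

```python
# NOT MICROSCALE: a Bradley-Terry model fit by maximum likelihood. The
# utilities here are unconstrained scalars, unlike MICROSCALE's
# microeconomic constructs (indifference curves, constrained
# maximization); only the ML machinery is illustrated.
import numpy as np
from scipy.optimize import minimize

# wins[i, j] = times option i was preferred to option j (invented data)
wins = np.array([[0, 8, 6, 9],
                 [2, 0, 5, 7],
                 [4, 5, 0, 6],
                 [1, 3, 4, 0]], dtype=float)
m = wins.shape[0]

def neg_log_lik(u):
    diff = u[:, None] - u[None, :]
    log_p = diff - np.logaddexp(0.0, diff)   # log P(i preferred to j)
    return -np.sum(wins * log_p)

fit = minimize(neg_log_lik, np.zeros(m), method="BFGS")
print("utilities (centered):", np.round(fit.x - fit.x.mean(), 3))
```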
27.
Restricted multidimensional scaling models for asymmetric proximities
Restricted multidimensional scaling models [Bentler & Weeks, 1978], which allow constraints on parameters, are extended to the case of asymmetric data. Separate functions are used to model the symmetric and antisymmetric parts of the data. The approach is also extended to the case in which data are presumed to be linearly related to squared distances. Examples of several models are provided, using journal citation data. Possible extensions of the models are considered. This research was supported in part by USPHS Grant 0A01070, P. M. Bentler, principal investigator, and NIMH Grant MH-24819, E. J. Anthony and J. Worland, principal investigators. The authors wish to thank E. W. Holman and several anonymous reviewers for their valuable suggestions concerning this research.
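The symmetric/antisymmetric split that the separate model functions operate on is the standard unique decomposition of any square matrix; for an asymmetric proximity matrix A (illustrative numbers below, not the paper's journal citation data):

```python
# Any square proximity matrix decomposes uniquely as A = S + T with S
# symmetric and T antisymmetric; the models fit separate functions to
# each part.
import numpy as np

A = np.array([[0., 5., 2.],
              [1., 0., 4.],
              [3., 2., 0.]])

S = (A + A.T) / 2        # symmetric part (average flow between i and j)
T = (A - A.T) / 2        # antisymmetric part (net directed flow i -> j)

assert np.allclose(S, S.T) and np.allclose(T, -T.T) and np.allclose(S + T, A)
print("symmetric part:\n", S)
print("antisymmetric part:\n", T)
```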
28.
Two procedures for canonical analysis with stationary compositing weights are developed for the situation in which the same variables are measured on different occasions. The first, SUMCOV, maximizes the sum of the covariances of the canonical variates subject to norming constraints. The second, COLLIN, maximizes the largest root of the covariances of the canonical variates subject to norming constraints. A characterization theorem establishes a model building approach. Both methods are extended to allow for cohort sequential designs. Finally, a numerical illustration utilizing Nesselroade and Baltes data is presented. The authors wish to thank John Nesselroade for permitting us to use the data whose analysis we present.
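For a single pair of variable sets, the core optimization — maximize cov(a′X, b′Y) subject to unit-norm weights — is solved by the leading singular vectors of the cross-covariance matrix. The sketch below shows only that building block, not the stationary-weight, multi-occasion SUMCOV/COLLIN machinery; the data-generating model is invented:

```python
# Building block only: with unit-norm weight vectors, the maximum of
# cov(a'X, b'Y) equals the largest singular value of the cross-covariance
# matrix, attained at the leading singular vectors.
import numpy as np

rng = np.random.default_rng(3)
n = 500
common = rng.standard_normal((n, 1))
X = common @ np.array([[1.0, 0.8, 0.2]]) + 0.5 * rng.standard_normal((n, 3))
Y = common @ np.array([[0.9, 0.1, 0.7]]) + 0.5 * rng.standard_normal((n, 3))

Sxy = np.cov(np.hstack([X, Y]), rowvar=False)[:3, 3:]   # cross-covariance
U, sing, Vt = np.linalg.svd(Sxy)
a, b = U[:, 0], Vt[0, :]                                # ||a|| = ||b|| = 1

print("maximal covariance (sigma_1):", sing[0])
print("cov of composites:          ", np.cov(X @ a, Y @ b)[0, 1])
```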
29.
Percentage agreement measures of interobserver agreement or "reliability" have traditionally been used to summarize observer agreement from studies using interval recording, time-sampling, and trial-scoring data collection procedures. Recent articles disagree on whether to continue using these percentage agreement measures, on which ones to use, and on what to do about chance agreements if their use is continued. Much of the disagreement derives from the need to be reasonably certain that we do not accept, as evidence of true interobserver agreement, agreement levels which are substantially probable as a result of chance observer agreement. The various percentage agreement measures are shown to be adequate to this task, but easier ways are discussed. Tables are given to permit checking whether obtained disagreements are unlikely to be due to chance. Particularly important is the discovery of a simple rule that, when met, makes the tables unnecessary: if reliability checks using 50 or more observation occasions produce 10% or fewer disagreements, for behavior rates from 10% through 90%, the agreement achieved is quite improbably the result of chance.
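The rule can be checked under one simple chance model — two independent observers who each record the behavior at its base rate p, so a chance disagreement on any occasion has probability 2p(1−p). Note that this chance model is an assumption of the sketch, not necessarily the one underlying the paper's tables:

```python
# Binomial tail probability of <= 10% disagreements by luck alone, under
# the ASSUMED chance model described in the lead-in.
from scipy.stats import binom

n_occasions, max_disagree = 50, 5            # 10% of 50 occasions
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    tail = binom.cdf(max_disagree, n_occasions, 2 * p * (1 - p))
    print(f"behavior rate {p:.1f}: P(<= {max_disagree} disagreements) "
          f"= {tail:.4f}")
```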
30.
Synthetic data are used to examine how well axiomatic and numerical conjoint measurement methods, individually and comparatively, recover simple polynomial generators in three dimensions. The study illustrates extensions of numerical conjoint measurement (NCM) to identify and model distributive and dual-distributive, in addition to the usual additive, data structures. It was found that although minimum STRESS was the criterion of fit, another statistic, predictive capability, provided a better diagnosis of the known generating model. That NCM methods were able to better identify generating models conflicts with Krantz and Tversky's assertion that, in general, direct axiom tests provide a more powerful diagnostic test between alternative composition rules than does evaluation of numerical correspondence. For all methods, dual-distributive models are the most difficult to recover, while, consistent with past studies, the additive model is the most robust of the fitted models. Douglas Emery is now at the Krannert Graduate School of Management, Purdue University, West Lafayette, IN, on leave from the University of Calgary.
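The three generators studied correspond to Krantz and Tversky's simple polynomials in three variables. The sketch below produces synthetic data under each rule (data generation only; the axiomatic tests, NCM fitting, and STRESS minimization are not attempted here):

```python
# The three simple polynomial generators in three dimensions (after
# Krantz & Tversky): additive, distributive, dual-distributive.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
a, b, c = rng.uniform(1, 5, size=(3, 100))   # three factor scales

additive = a + b + c
distributive = (a + b) * c
dual_distributive = a * b + c

# Rank correlations between rules hint at why they can be hard to tell
# apart from ordinal fit alone.
rho_ad, _ = spearmanr(additive, distributive)
rho_au, _ = spearmanr(additive, dual_distributive)
print(f"additive vs distributive:      rho = {rho_ad:.3f}")
print(f"additive vs dual-distributive: rho = {rho_au:.3f}")
```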