Related Articles
1.
For small and balanced analysis of variance problems, standard computer programs are convenient and efficient. For large problems, regression programs are at least competitive with analysis of variance programs; and, when a problem is unbalanced, they usually provide the only reasonable solution. This paper discusses procedures for using regression programs for the computing of analyses of variance. A procedure for coding matrices is described for experimental designs having nested and crossed factors. Several illustrations are given, and the limitation of the procedure with large repeated measures designs is discussed. A second algorithm is offered for obtaining the sums of squares for nested factors and their interactions in such designs.
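To make the general idea concrete, here is a minimal sketch (not taken from the paper) of running a two-way analysis of variance through an ordinary regression routine: each factor is effect-coded, and factor sums of squares are obtained by comparing the full model with reduced models. The design, data, and coding scheme below are hypothetical.

```python
# Minimal sketch: two-way ANOVA computed with a regression routine,
# using effect (deviation) coding for a 2 x 3 between-subjects design.
# The data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a_levels, b_levels, n_per_cell = 2, 3, 5
A = np.repeat(np.arange(a_levels), b_levels * n_per_cell)
B = np.tile(np.repeat(np.arange(b_levels), n_per_cell), a_levels)
y = 1.0 * (A == 0) + 0.5 * (B == 1) + rng.normal(0, 1, A.size)

def effect_code(factor, levels):
    """Effect-code a factor: levels-1 columns of +1 / 0 / -1."""
    X = np.zeros((factor.size, levels - 1))
    for j in range(levels - 1):
        X[factor == j, j] = 1.0
        X[factor == levels - 1, j] = -1.0
    return X

XA, XB = effect_code(A, a_levels), effect_code(B, b_levels)
XAB = np.hstack([XA[:, [i]] * XB[:, [j]]
                 for i in range(XA.shape[1]) for j in range(XB.shape[1])])

def rss(*blocks):
    """Residual sum of squares and column count for y regressed on intercept + blocks."""
    X = np.column_stack([np.ones(y.size), *blocks])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2), X.shape[1]

rss_full, p_full = rss(XA, XB, XAB)
df_error = y.size - p_full
for name, reduced in [("A", (XB, XAB)), ("B", (XA, XAB)), ("AxB", (XA, XB))]:
    rss_red, p_red = rss(*reduced)
    ss, df = rss_red - rss_full, p_full - p_red
    F = (ss / df) / (rss_full / df_error)
    print(f"{name}: SS={ss:.2f}, F({df},{df_error})={F:.2f}, "
          f"p={stats.f.sf(F, df, df_error):.3f}")
```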

2.
We describe a procedure to measure the error of localization on the skin. The procedure, which provides for rapid measurement of the error of localization and rapid analysis of the data, uses a digitizing tablet interfaced with a computer. A photocopy of the part of the body to be tested is placed on the digitizing tablet. The subject localizes the stimulus by touching the pen of the digitizing tablet to the photocopy. The location of the pen contact is stored, and the error of localization is determined by the computer. A graphic representation of each subject’s test area can be stored. Both stimulus and response locations can be displayed on this graphic representation. The procedure also allows the same sites on the skin to be tested over a period of weeks or months.
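As a minimal illustration of the kind of computation involved, the sketch below derives localization error as the Euclidean distance between stimulus and response coordinates; the tablet readings are hypothetical and the analysis is far simpler than the full procedure described above.

```python
# Minimal sketch: localization error as the Euclidean distance between each
# stimulus site and the point the subject touched on the digitizing tablet.
# Coordinates are hypothetical readings in millimeters.
import numpy as np

stimulus_xy = np.array([[120.0, 85.0], [132.5, 90.0], [118.0, 101.5]])
response_xy = np.array([[123.0, 88.5], [130.0, 95.0], [121.5, 99.0]])

errors = np.linalg.norm(response_xy - stimulus_xy, axis=1)
print("error per trial (mm):", np.round(errors, 2))
print("mean localization error (mm):", round(errors.mean(), 2))
```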

3.
Two related orthogonal analytic rotation criteria for factor analysis are proposed. Criterion I is based upon the principle that variables which appear on the same factor should be correlated. Criterion II is based upon the principle that variables which are uncorrelated should not appear on the same factor. The recommended procedure is to rotate first by criterion I, eliminate the minor factors, and then rerotate the remaining major factors by criterion II. An example is presented in which this procedure produced a rotational solution very close to expectations whereas a varimax solution exhibited certain distortions. A computer program is provided.

4.
A procedure for evaluating personality is described. Conventional and transposed factor analyses are made from Q sort data describing the important people in the subject's life in terms of his own constructs (à la Kelly) as variables. The scoring procedure produces construct-factors and people-factors. Sorts from a subject illustrate the method. Cross-cultural applications are possible since the translation of personal constructs is not essential. Simulation of relationships to others, SORTO, combines Kelly's (1955) personal constructs with Stephenson's (1953) Q sort procedure. A large amount of personal data is factor analyzed by the computer to reveal the main idiosyncratic features of a subject's perceptions of his relationships to others. Maximum output from the analysis occurs when the nature of the personal constructs employed is supplied as input to the analysis.

5.
A unified treatment of the weighting problem
A general procedure is described for obtaining weighted linear combinations of variables. This includes, as special cases, multiple regression weights, canonical variate analysis, principal components, maximizing composite reliability, canonical factor analysis, and certain other well-known methods. The general procedure is shown to yield certain desirable invariance properties with respect to transformations of the variables. The author wishes to thank Dr. A. J. Cropley for preparing the necessary computer programs for this study.
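The shared structure of many of these special cases can be illustrated as choosing weights that maximize a ratio of quadratic forms, which reduces to a generalized eigenvalue problem. The sketch below is only a generic illustration under that assumption: A and B here are between- and within-group covariance matrices (the canonical variate analysis special case) computed from two hypothetical groups; other special cases would substitute different matrices.

```python
# Minimal sketch: weights w maximizing (w' A w) / (w' B w), obtained from the
# generalized eigenvalue problem A w = lambda B w. A and B are illustrative
# between- and within-group covariance matrices from simulated data.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
g1 = rng.normal(loc=[0.0, 0.0, 0.0, 0.0], size=(60, 4))
g2 = rng.normal(loc=[1.0, 0.5, 0.0, 0.0], size=(60, 4))

grand_mean = np.vstack([g1, g2]).mean(axis=0)
B_within = (np.cov(g1.T) + np.cov(g2.T)) / 2
A_between = sum(g.shape[0] * np.outer(g.mean(axis=0) - grand_mean,
                                      g.mean(axis=0) - grand_mean)
                for g in (g1, g2))

eigvals, eigvecs = eigh(A_between, B_within)   # generalized eigenproblem
w = eigvecs[:, -1]                             # weights giving the largest ratio
print("maximized between/within ratio:", round(eigvals[-1], 3))
print("weights:", np.round(w, 3))
```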

6.
A computer model that simulates the patterns of responding of infrahuman subjects under several schedules of reinforcement is described. The model is dynamic in that it continuously assesses the values of several interacting variables that are, in turn, affected by simulated environmental events that are scheduled by a procedure program. The data generated by the computer model, including cumulative records, closely conform to reported experimental data. The results indicate that computer simulations are a very useful tool for developing quantitative theories of operant behavior.

7.
A nonparametric item response theory model—the Mokken scale analysis (a stochastic elaboration of the deterministic Guttman scale)—and a computer program that performs this analysis are described. Three procedures of scaling are distinguished: a search procedure, an evaluation of the whole set of items, and an extension of an existing scale. All procedures provide a coefficient of scalability for all items that meet the criteria of the Mokken model and an item coefficient of scalability for every item. Four different types of reliability coefficient are computed, both for the entire set of items and for the scalable items. A test of robustness of the resulting scale can be performed to analyze whether the scale is invariant across different subgroups or samples. This robustness test serves as a goodness-of-fit test for the established scale. The program is written in FORTRAN 77. Two versions are available, an SPSS-X procedure program (which can be used with the SPSS-X mainframe package) and a stand-alone program suitable for both mainframes and microcomputers.
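For readers unfamiliar with the coefficients involved, the following sketch computes Loevinger's H, the scalability coefficient a Mokken analysis reports for item pairs and for the whole item set, from made-up dichotomous responses. It is not the FORTRAN program described above, only an illustration of the core quantity.

```python
# Minimal sketch of Loevinger's H: for each item pair, 1 minus the ratio of
# observed to expected Guttman errors; the scale coefficient pools all pairs.
# Responses (0/1) are simulated for illustration.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
theta = rng.normal(size=200)
difficulty = np.array([-1.0, -0.3, 0.2, 0.8])
X = (theta[:, None] + rng.normal(0, 1, (200, 4)) > difficulty).astype(int)

n, k = X.shape
p = X.mean(axis=0)
F_total = E_total = 0.0
for i, j in combinations(range(k), 2):
    easy, hard = (i, j) if p[i] >= p[j] else (j, i)
    # Guttman error: the harder item is passed while the easier one is failed.
    F = np.sum((X[:, hard] == 1) & (X[:, easy] == 0))
    E = n * p[hard] * (1 - p[easy])
    print(f"H({i},{j}) = {1 - F / E:.3f}")
    F_total += F
    E_total += E
print(f"scale H = {1 - F_total / E_total:.3f}")
```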

8.
Power of the likelihood ratio test in covariance structure analysis
A procedure for computing the power of the likelihood ratio test used in the context of covariance structure analysis is derived. The procedure uses statistics associated with the standard output of the computer programs commonly used and assumes that a specific alternative value of the parameter vector is specified. Using the noncentral chi-square distribution, the power of the test is approximated by the asymptotic one for a sequence of local alternatives. The procedure is illustrated by an example. A Monte Carlo experiment also shows how good the approximation is for a specific case. This research was made possible by a grant from the Dutch Organization for Advancement of Pure Research (ZWO). The authors also wish to acknowledge the helpful comments and suggestions from the editor and anonymous reviewers.
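The basic relation used by such a power calculation can be sketched in a few lines: the critical value comes from the central chi-square distribution and the power from the noncentral one. The degrees of freedom, noncentrality parameter, and alpha level below are arbitrary illustrative values, not the paper's example.

```python
# Minimal sketch: approximate power of a likelihood ratio test via the
# noncentral chi-square distribution.
from scipy.stats import chi2, ncx2

df = 5          # degrees of freedom of the chi-square test statistic
ncp = 10.0      # noncentrality implied by the specific alternative and sample size
alpha = 0.05

critical_value = chi2.ppf(1 - alpha, df)
power = ncx2.sf(critical_value, df, ncp)
print(f"critical value = {critical_value:.3f}, approximate power = {power:.3f}")
```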

9.
The area under the rating ROC can be a useful index of stimulus discriminability. It has the advantage that few assumptions must be made about the underlying distributions of signal and noise. One of the assumptions that must be made is the order of the subject’s response scale. A procedure is outlined for determining the order of the response scale actually used by the subject, and some of the implications of reordering the scale used in analysis are discussed. Results of computer simulation of the effects of varying some important experimental parameters are presented.
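As a minimal illustration, the area under a rating ROC can be computed from rating-category frequencies by forming cumulative hit and false-alarm rates and applying the trapezoidal rule; the counts below are hypothetical.

```python
# Minimal sketch: ROC area from rating-category frequencies, with categories
# ordered from the most confident "signal" response downward. Counts are made up.
import numpy as np

signal_counts = np.array([40, 25, 15, 10, 10])   # signal trials per rating category
noise_counts  = np.array([ 5, 10, 15, 25, 45])   # noise trials per rating category

hits = np.concatenate(([0.0], np.cumsum(signal_counts) / signal_counts.sum()))
fas  = np.concatenate(([0.0], np.cumsum(noise_counts) / noise_counts.sum()))

area = np.sum(np.diff(fas) * (hits[1:] + hits[:-1]) / 2)   # trapezoidal rule
print(f"ROC area = {area:.3f}   (0.5 = chance, 1.0 = perfect)")
```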

10.
A computer animation for the calibration of color video projectors is described. The animation facilitates the calibration procedure and can be readily programmed on almost any graphics computer or played from a videotape.

11.
While a rotation procedure currently exists to simultaneously maximize Tucker's coefficient of congruence between corresponding factors of two factor matrices under orthogonal rotation of one factor matrix, only approximate solutions are known for the generalized case where two or more matrices are rotated. A generalization and modification of the existing rotation procedure to simultaneously maximize the congruence is described. An example using four data matrices, comparing the generalized congruence maximization procedure with alternative rotation procedures, is presented. The results show a marked improvement of the obtained congruence using the generalized congruence maximization procedure compared to other procedures, without a significant loss of success with respect to the least squares criterion. A computer program written by the author to perform the rotations is briefly discussed.
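For reference, Tucker's coefficient of congruence, the quantity such procedures maximize, can be computed for corresponding factors of two loading matrices as follows; the loading matrices below are hypothetical, and the rotation algorithm itself is not reproduced here.

```python
# Minimal sketch: Tucker's coefficient of congruence between corresponding
# factors of two loading matrices.
import numpy as np

def congruence(x, y):
    """Tucker's phi between two loading vectors."""
    return np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y))

loadings_1 = np.array([[.80, .05], [.75, .10], [.10, .70], [.05, .65]])
loadings_2 = np.array([[.78, .12], [.70, .02], [.15, .72], [.08, .60]])

for f in range(loadings_1.shape[1]):
    phi = congruence(loadings_1[:, f], loadings_2[:, f])
    print(f"factor {f + 1}: phi = {phi:.3f}")
```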

12.
A wide variety of complex waveforms can be generated by approximating the desired analog waveform from an array of digital values. Some basic properties of these digital approximations are discussed in terms of pulse amplitude modulation and sampling theory. The waveforms are generated by transferring the digital values to a digital-to-analog converter followed by a low-pass filter. This usually requires the dedicated use of a computer. We have built a device, incorporating solid state memory, that can store, time, and transfer previously computed digital values, so that a computer is no longer necessary to generate the waveforms. Specifications of the digital-to-analog converter and appropriate settings of the filter are discussed, along with a simplified procedure for calculating waveforms that have line spectra. An adaptation of this procedure enables the device to be used as a high-speed programmable pure-tone source.

13.
This paper describes a method for recording couples’ observations of their communications during an ongoing interaction sequence. The procedure allows the recording and decoding of immediate reactions, while maintaining an ecologically valid observation environment. Responses are encoded on a stereo cassette tape player and are decoded by a PDP-11/10 computer with two Schmitt triggers that count the peaks in the tones. The computer provides analysis of the response value and time. Applications for other research problems in the areas of social and applied clinical psychology are discussed.

14.
A split-sample replication criterion originally proposed by J. E. Overall and K. N. Magee (1992) as a stopping rule for hierarchical cluster analysis is applied to multiple data sets generated by sampling with replacement from an original simulated primary data set. An investigation of the validity of this bootstrap procedure was undertaken using different combinations of the true number of latent populations, degrees of overlap, and sample sizes. The bootstrap procedure enhanced the accuracy of identifying the true number of latent populations under virtually all conditions. Increasing the size of the resampled data sets relative to the size of the primary data set further increased accuracy. A computer program to implement the bootstrap stopping rule is made available via a referenced Web site.
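A greatly simplified sketch of the general idea, not the referenced program, is given below: for each candidate number of clusters, a bootstrap resample is split in half, one half is clustered and the other half assigned to the nearest resulting centroids, the second half is also clustered directly, and the agreement between the two assignments is scored. The adjusted Rand index is used here as a stand-in for Overall and Magee's replication statistic, and the data are simulated.

```python
# Simplified sketch of a split-sample replication stopping rule applied to
# bootstrap resamples of a simulated primary data set.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(3)
centers = np.array([[0, 0], [4, 0], [2, 4]])
data = np.vstack([c + rng.normal(0, 0.8, (60, 2)) for c in centers])

def replication_score(sample, k):
    rng.shuffle(sample)
    half = sample.shape[0] // 2
    a, b = sample[:half], sample[half:]
    labels_a = fcluster(linkage(a, method="ward"), k, criterion="maxclust")
    centroids = np.array([a[labels_a == c].mean(axis=0) for c in np.unique(labels_a)])
    assigned_b = cdist(b, centroids).argmin(axis=1)          # nearest-centroid assignment
    labels_b = fcluster(linkage(b, method="ward"), k, criterion="maxclust")
    return adjusted_rand_score(assigned_b, labels_b)

n_boot = 20
for k in range(2, 6):
    scores = []
    for _ in range(n_boot):
        boot = data[rng.integers(0, data.shape[0], data.shape[0])]  # resample with replacement
        scores.append(replication_score(boot, k))
    print(f"k = {k}: mean replication agreement = {np.mean(scores):.3f}")
```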

15.
Students with developmental disabilities often have difficulty with writing skills such as tracing, copying, and dictation writing. A student with writing difficulties participated in the present study, which used computer-based teaching applied in the home. We examined whether the student could copy Japanese Kanji characters after training with a constructed response matching-to-sample (CRMTS) procedure. The procedure was designed to teach identity Kanji construction. The results showed that the student not only acquired the constructed responses through this procedure but also generalized this spelling skill to copying both trained and untrained Kanji characters. The results are discussed in terms of the effect of the CRMTS procedure on the acquisition and transfer of writing characters and the applicability of computer-based home teaching. Copyright © 2008 John Wiley & Sons, Ltd.

16.
孔明 (Kong Ming), 卞冉 (Bian Ran), 张厚粲 (Zhang Houcan). 《心理科学》 (Psychological Science), 2007, 30(4): 924-925, 918
Parallel analysis is a method for deciding how many factors to retain in exploratory factor analysis. The criteria commonly used for this decision, the eigenvalue-greater-than-one rule and the scree plot, each have shortcomings, and parallel analysis offers an alternative approach to determining the number of factors to retain. This paper describes the steps of parallel analysis, the logic underlying it, and the software available for carrying it out, and illustrates its application in exploratory factor analysis with a worked example.
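A minimal sketch of the core of parallel analysis follows: the eigenvalues of the observed correlation matrix are compared with the average eigenvalues from many random data sets of the same size, and factors are retained as long as the observed eigenvalue exceeds the corresponding random one. The data set is simulated for illustration, and the sketch is not the software discussed in the paper.

```python
# Minimal sketch of parallel analysis with a simulated two-factor data set.
import numpy as np

rng = np.random.default_rng(4)
n, p = 300, 8
common = rng.normal(size=(n, 2))                     # two true common factors
loadings = rng.uniform(0.4, 0.8, size=(p, 2))
X = common @ loadings.T + rng.normal(size=(n, p))

obs_eigs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]

n_sim = 200
rand_eigs = np.zeros((n_sim, p))
for s in range(n_sim):
    R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
    rand_eigs[s] = np.linalg.eigvalsh(R)[::-1]
mean_rand = rand_eigs.mean(axis=0)

crossed = obs_eigs <= mean_rand                      # first point where observed drops below random
n_keep = int(np.argmax(crossed)) if crossed.any() else p
print("observed eigenvalues:   ", np.round(obs_eigs, 2))
print("mean random eigenvalues:", np.round(mean_rand, 2))
print("number of factors to retain:", n_keep)
```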

17.
This paper deals with the determination of optimal weights for points on scoring scales for subjective comparative experiments. A scoring scale with a specific number of points is considered, and it is assumed that verbal or other indications imply an order to the scale points. The optimal spacing for the scale points is obtained in the sense that treatment or item differences are maximized relative to error or within-treatment variation. The method is presented in sufficiently generalized form to be used directly with any experimental design leading to the analysis of variance. An iterative procedure, suitable for computer use, yields the optimal differences among the ordered scale points. Properties of this procedure are discussed. Research sponsored in part by the Statistics Branch, Office of Naval Research, and in part by the Research Center, the General Foods Corporation. Reproduction in whole or in part is permitted for any purpose of the United States Government.

18.
A method of estimating the parameters of the normal ogive model for dichotomously scored item responses by maximum likelihood is demonstrated. Although the procedure requires numerical integration in order to evaluate the likelihood equations, a computer-implemented Newton-Raphson solution is shown to be straightforward in other respects. Empirical tests of the procedure show that the resulting estimates are very similar to those based on a conventional analysis of item difficulties and first factor loadings obtained from the matrix of tetrachoric correlation coefficients. Problems of testing the fit of the model, and of obtaining invariant parameters, are discussed. Research reported in this paper was supported by NSF Grant 1025 to the University of Chicago.
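To indicate what the estimation involves, the sketch below writes down the marginal likelihood of the normal ogive model, handles the numerical integration with Gauss-Hermite quadrature, and maximizes it with a general-purpose optimizer in place of the Newton-Raphson solution the paper describes; the item responses are simulated.

```python
# Small sketch of marginal maximum likelihood for the normal ogive model:
# P(x_ij = 1 | theta_i) = Phi(a_j * theta_i + b_j), with theta ~ N(0, 1)
# integrated out by Gauss-Hermite quadrature. A general-purpose optimizer
# replaces the paper's Newton-Raphson solution; responses are simulated.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n_persons, n_items = 500, 4
true_a = np.array([0.8, 1.0, 1.2, 1.5])
true_b = np.array([-0.5, 0.0, 0.3, 0.8])
theta = rng.normal(size=n_persons)
X = (rng.uniform(size=(n_persons, n_items)) <
     norm.cdf(true_a * theta[:, None] + true_b)).astype(int)

nodes, weights = np.polynomial.hermite.hermgauss(21)
quad_theta = np.sqrt(2.0) * nodes              # quadrature points for N(0, 1)
quad_w = weights / np.sqrt(np.pi)              # quadrature weights (sum to ~1)

def neg_log_marginal_likelihood(params):
    a, b = params[:n_items], params[n_items:]
    P = norm.cdf(a * quad_theta[:, None] + b)                        # (nodes, items)
    like = np.prod(np.where(X[:, None, :] == 1, P, 1.0 - P), axis=2)  # (persons, nodes)
    return -np.sum(np.log(like @ quad_w))

start = np.concatenate([np.ones(n_items), np.zeros(n_items)])
bounds = [(0.1, 5.0)] * n_items + [(-5.0, 5.0)] * n_items
fit = minimize(neg_log_marginal_likelihood, start, method="L-BFGS-B", bounds=bounds)
print("estimated a:", np.round(fit.x[:n_items], 2))
print("estimated b:", np.round(fit.x[n_items:], 2))
```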

19.
Test bias, in contrast to test fairness, is best conceptualized in validity terms amenable to statistical analysis. Evidence of predictive validity may be most salient in many situations. Evaluation of predictive bias is generally operationalized via linear regression. Potthoff (1978) provided an efficient and parsimonious regression bias procedure that allows both simultaneous and separate tests of regression slopes and intercepts across groups. A Macintosh computer program, MacPotthoff, is presented for automated calculation of Potthoff regression bias statistics.
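The model-comparison logic behind a Potthoff-type analysis can be sketched as an F test comparing a pooled regression of criterion on predictor against a model that allows each group its own intercept and slope. The data below are simulated, and the sketch is not the MacPotthoff program.

```python
# Minimal sketch: simultaneous test of equal intercepts and slopes across two
# groups, via an F test comparing pooled and group-specific regression models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_per_group = 100
x = rng.normal(size=2 * n_per_group)
group = np.repeat([0, 1], n_per_group)
y = 0.5 * x + 0.3 * group + rng.normal(0, 1, x.size)   # intercept difference built in

def rss(X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2), X.shape[1]

ones = np.ones_like(x)
rss_pooled, p_pooled = rss(np.column_stack([ones, x]))
rss_full, p_full = rss(np.column_stack([ones, x, group, group * x]))

df1, df2 = p_full - p_pooled, x.size - p_full
F = ((rss_pooled - rss_full) / df1) / (rss_full / df2)
print(f"simultaneous test: F({df1},{df2}) = {F:.2f}, p = {stats.f.sf(F, df1, df2):.4f}")
```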

20.
A scale-invariant index of factorial simplicity is proposed as a summary statistic for principal components and factor analysis. The index ranges from zero to one, and attains its maximum when all variables are simple rather than factorially complex. A factor scale-free oblique transformation method is developed to maximize the index. In addition, a new orthogonal rotation procedure is developed. These factor transformation methods are implemented using rapidly convergent computer programs. Observed results indicate that the procedures produce meaningfully simple factor pattern solutions. This investigation was supported in part by a Research Scientist Development Award (K02-DA00017) and research grants (MH24149 and DA01070) from the U. S. Public Health Service. The assistance of Andrew L. Comrey, Henry F. Kaiser, Bonnie Barron, Marion Hee, and several anonymous reviewers is gratefully acknowledged.
