Similar Documents
20 similar documents found; search time 15 ms.
1.
A general formulation of the power law is presented which has two special features: (1) negative exponents are admissible; and (2) the log law is a special limiting case. Estimation procedures, which provide joint estimates of the exponent and the absolute threshold, are derived for the direct ratio scaling methods. A solution is provided for the averaging problem for ratio production and bisection scaling, two methods generating observations on the physical scale, and Monte Carlo methods are used to evaluate the resulting estimators.
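Both special features can be illustrated with a Box-Cox-style parameterization. This is only a sketch of the idea (the paper's own formulation may differ, and the function name and parameters here are ours): negative exponents are admissible, and the log law emerges as the exponent tends to zero.

```python
import math

def psychophysical(phi, beta, phi0=1.0, a=1.0):
    """Box-Cox-style generalized power law (illustrative form).

    For beta != 0: a * ((phi/phi0)**beta - 1) / beta, so negative
    exponents are admissible. As beta -> 0 this tends to
    a * log(phi/phi0), i.e. the log law is a limiting case.
    """
    r = phi / phi0
    if abs(beta) < 1e-12:
        return a * math.log(r)
    return a * (r**beta - 1.0) / beta

# The beta -> 0 limit approaches the log law:
x = 8.0
print(abs(psychophysical(x, 1e-9) - math.log(x)) < 1e-6)  # True
```

A negative exponent such as `beta = -1.0` is handled by the same formula, which is the point of admitting the whole real line for the exponent.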

2.
An examination of the determinantal equation associated with Rao's canonical factors suggests that Guttman's best lower bound for the number of common factors corresponds to the number of positive canonical correlations when squared multiple correlations are used as the initial estimates of communality. When these initial communality estimates are used, solving Rao's determinantal equation (at the first stage) permits expressing several matrices as functions of factors that differ only in the scale of their columns; these matrices include the correlation matrix with units in the diagonal, the correlation matrix with squared multiple correlations as communality estimates, Guttman's image covariance matrix, and Guttman's anti-image covariance matrix. Further, the factor scores associated with these factors can be shown to be either identical or simply related by a scale change. Implications for practice are discussed, and a computing scheme which would lead to an exhaustive analysis of the data with several optional outputs is outlined.

3.
The paper describes a solution for scale values in successive intervals scaling which does not assume equal covariances and variances. A more restrictive distribution assumption is made, however. Advantages and disadvantages of the method are discussed in relation to the two available conventional scaling techniques for scaling with unequal variances.

4.
The linguistic performance of 120 aphasic patients of the four standard syndromes assessed by the Aachen Aphasia Test (AAT) is analyzed by a nonmetric (ordinal) multidimensional scaling procedure (Smallest Space Analysis, SSA1). The linguistic structure of the test items is characterized within the framework of L. Guttman's facet theory. Three systematic components (facets) are discerned: linguistic modality, unit, and regularity. Properties of the facets as well as their relations are assessed and tested empirically by analyzing the interrelations among different items or sets of items. The spatial configurations obtained by the scaling procedure only partially fit the expectations derived from the facet-theory model. The modality facet was found to have a strong overriding influence on the aphasic test performance. The unit and regularity facets emerged only in the most rigorously designed subtests, Written Language and Comprehension. The results suggest the introduction of a new combined facet, linguistic complexity, which reflects the dependency of the facets regularity and unit.

5.
Goodman contributed to the theory of scaling by including a category of intrinsically unscalable respondents in addition to the usual scale-type respondents. However, his formulation permits only error-free responses by respondents from the scale types. This paper presents new scaling models which have the properties that: (1) respondents in the scale types are subject to response errors; (2) a test of significance can be constructed to assist in deciding on the necessity for including an intrinsically unscalable class in the model; and (3) when an intrinsically unscalable class is not needed to explain the data, the model reduces to a probabilistic, rather than to a deterministic, form. Three data sets are analyzed with the new models and are used to illustrate stages of hypothesis testing.

6.
McGrath RE. Journal of Personality Assessment, 2004, 83(2):128-130; discussion 136-140.
Hofstee and Ten Berge (2004/this issue) outline a method of scale transformation that places scores on a common absolute scale. This contrasts with traditional relative methods of transformation, which involve scaling in relation to a sample mean. Their primary intention seems to be to produce a scale that is intrinsically meaningful. This issue of scale meaning is discussed in some detail, including reference to an alternate approach to absolute scaling offered by Cohen, Cohen, Aiken, and West (1999). Ultimately, neither approach to absolute scaling seems completely satisfactory as a resolution to this problem. It is suggested that the lack of meaning inherent to many psychosocial measures is a natural product of traditional aggregative practices in scale development and may be invulnerable to statistical correction.

7.
Conditions are given under which the stationary points and values of a ratio of quadratic forms in two singular matrices can be obtained by a series of simple matrix operations. It is shown that two classes of optimal weighting problems, based respectively on the grouping of variables and on the grouping of observations, satisfy these conditions. The classical treatment of optimal scaling of forced-choice multicategory data is extended for these cases. It is shown that previously suggested methods based on reparameterization will work only under very special conditions.

8.
We describe a multivariate model for a certain class of discrimination methods in this paper and discuss a multivariate Euclidean model for a particular method, the triangular method. The methods of interest involve the selection or grouping of stimuli drawn from two stimulus sets on the basis of attributes invoked by the subject. These methods are commonly used for estimation and hypothesis testing concerning possible differences between foods, beverages, odorants, tastants, and visual stimuli. Mathematical formulation of the bivariate model for the triangular method is provided as well as extensive Monte Carlo results for up to 10-dimensional cases. The effects of correlation structure and variance inequality are discussed. Results from these methods (as probability of a correct response) are not monotonically related to the distance between the means of the stimulus sets from which the stimuli are drawn but depend in a particular way on dimensionality, correlation structure, and the relative orientation of the momentary sensory values in a multidimensional space. The importance of these results to the validity of these methods as currently employed is discussed and the possibility of developing a new approach to multidimensional scaling on the basis of this new theory is considered.
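The Monte Carlo logic behind such results can be sketched for the triangular method under simplifying assumptions the paper relaxes (independent dimensions, equal variances, no correlation structure); the function name and decision rule below are illustrative, not the paper's model.

```python
import random, math

def triangle_test_pc(mean_delta, sigma=1.0, dims=2, trials=20000, seed=1):
    """Monte Carlo estimate of P(correct) in the triangular method.

    Two momentary sensory values come from distribution A (mean 0) and
    one from B (mean shifted by mean_delta along the first axis). The
    respondent picks as "odd" the stimulus farthest from the other two,
    i.e. the closest pair is judged "same". Dimensions are assumed
    independent with equal variance, a simplification of the model in
    the abstract.
    """
    rng = random.Random(seed)

    def draw(shift):
        return [rng.gauss(shift if i == 0 else 0.0, sigma) for i in range(dims)]

    correct = 0
    for _ in range(trials):
        a1, a2, b = draw(0.0), draw(0.0), draw(mean_delta)
        d12, d1b, d2b = math.dist(a1, a2), math.dist(a1, b), math.dist(a2, b)
        # the odd stimulus is the one outside the closest pair;
        # the response is correct when that stimulus is b
        if d12 == min(d12, d1b, d2b):
            correct += 1
    return correct / trials

# With no difference between A and B, performance is near chance (1/3):
print(abs(triangle_test_pc(0.0) - 1/3) < 0.02)  # True
```

Note the point the abstract makes: the estimated probability depends on dimensionality and the geometry of the two distributions, not only on the distance between their means.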

9.
This article argues for a task-based approach to identifying and individuating cognitive systems. The agent-based extended cognition approach faces a problem of cognitive bloat and has difficulty accommodating both sub-individual cognitive systems (“scaling down”) and some supra-individual cognitive systems (“scaling up”). The standard distributed cognition approach can accommodate a wider variety of supra-individual systems but likewise has difficulties with sub-individual systems and faces the problem of cognitive bloat. We develop a task-based variant of distributed cognition designed to scale up and down smoothly while providing a principled means of avoiding cognitive bloat. The advantages of the task-based approach are illustrated by means of two parallel case studies: re-representation in the human visual system and in a biomedical engineering laboratory.

10.
A case of the law of comparative judgment which does not assume equal variances and covariances has been applied in the scaling of conservatism from seven attitude scale items. The scale values are in good agreement with those computed from previously known cases. There is also some covariation between comparatal dispersions from Case IV and the new case.
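For readers unfamiliar with the "previously known cases" mentioned here, the baseline is Thurstone's Case V, which does assume equal dispersions and zero covariances. A minimal sketch of that baseline (the function name and the proportion matrix are ours; the abstract's unequal-variance case requires additional parameters):

```python
from statistics import NormalDist

def case_v_scale(P):
    """Thurstone Case V scale values from paired-comparison proportions.

    P[i][j] = proportion of judges preferring stimulus j over i.
    The scale value of j is the mean of z(P[i][j]) over i != j, which
    assumes equal discriminal dispersions and zero covariances, unlike
    the more general case the abstract describes.
    """
    z = NormalDist().inv_cdf
    n = len(P)
    vals = [sum(z(P[i][j]) for i in range(n) if i != j) / (n - 1)
            for j in range(n)]
    shift = min(vals)  # anchor the lowest-scaled stimulus at zero
    return [v - shift for v in vals]

# Three stimuli with a clear preference order C > B > A:
P = [[0.5, 0.7, 0.9],
     [0.3, 0.5, 0.7],
     [0.1, 0.3, 0.5]]
print(case_v_scale(P))  # increasing values, with A anchored at 0
```

Relaxing the equal-dispersion assumption, as in Case IV and the case reported here, changes the comparatal dispersions but typically leaves the scale values in broad agreement, which is what the abstract reports.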

11.
In a paired comparison experiment 783 subjects judged five different brands of champagne (three normal and two alcohol-reduced). Each subject judged only one single pair with respect to which one tasted more fizzy ("spritziger"), dry ("trockener"), prickling ("prickelnder") and better ("besser"). Three extended versions of the Bradley-Terry-Luce model are discussed and used to assess scale values for the criteria as well as for order effects. The results can be summarized in three points: (1) Goodness-of-fit for the simple BTL model is satisfactory for all criteria, except for the judgement "tastes better than". (2) Using four graded response categories instead of dichotomous responses decreases goodness-of-fit considerably. (3) Alcohol-reduced brands are less "dry", but are quite within the range of the other brands with respect to the remaining criteria. It is argued that the particular scaling method used is especially useful for deciding which criteria are appropriate for measurement on a one-dimensional scale and which are not.
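The simple BTL model of point (1) can be fitted by the classical Zermelo/Ford minorization iteration. A sketch with invented win counts (the paper's extensions for order effects and graded response categories are not modeled, and the function name is ours):

```python
def btl_strengths(wins, iters=200):
    """Fit Bradley-Terry(-Luce) strengths by the classical MM iteration.

    wins[i][j] = number of times item i was preferred to item j.
    Returns strengths normalized to sum to 1; the model's choice
    probability is then P(i beats j) = p[i] / (p[i] + p[j]).
    """
    n = len(wins)
    p = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of item i
            # each pair's comparison count, weighted by current strengths
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new.append(w_i / denom if denom > 0 else p[i])
        s = sum(new)
        p = [v / s for v in new]
    return p

# Item 0 wins most of its comparisons, item 2 the fewest, so the
# fitted strengths preserve that order:
wins = [[0, 7, 8],
        [3, 0, 6],
        [2, 4, 0]]
p = btl_strengths(wins)
print(p[0] > p[1] > p[2])  # True
```

Goodness-of-fit for such a model is usually assessed by comparing observed pairwise proportions with the fitted `p[i] / (p[i] + p[j])` values, which is the spirit of the likelihood-ratio tests the abstract refers to.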

12.
The scaling method used by von Fieandt, Ahonen & Järvinen (1964) for measuring color constancy is discussed. It is argued that (i) the underlying scaling model and the experimental procedure they employed are incompatible, and that (ii) the analytical procedure they applied to the data for obtaining the empirical scale values is incorrect. The data are reanalyzed, and the goodness of fit of both the original and the present results is evaluated.

13.
Comrey AL. Psychometrika, 1950, 15(3):317-325.
A method of ratio scaling is described for treating comparative judgments of paired stimuli. A method of comparative judgment developed by Metfessel is employed. Formulas for scale values and the solution of a sample problem are provided. The method is designed to provide internal-consistency checks on the scale values. Experimental interpretations of equal-unit and ratio properties of measurement scales are inherent in the procedure. The author wishes to express his appreciation to Professor J. P. Guilford for his helpful comments and encouragement in connection with the preparation of this article.

14.
Ordinal data occur frequently in the social sciences. When applying principal component analysis (PCA), however, those data are often treated as numeric, implying linear relationships between the variables at hand; alternatively, non-linear PCA is applied where the obtained quantifications are sometimes hard to interpret. Non-linear PCA for categorical data, also called optimal scoring/scaling, constructs new variables by assigning numerical values to categories such that the proportion of variance in those new variables that is explained by a predefined number of principal components (PCs) is maximized. We propose a penalized version of non-linear PCA for ordinal variables that is a smoothed intermediate between standard PCA on category labels and non-linear PCA as used so far. The new approach is by no means limited to monotonic effects and offers both better interpretability of the non-linear transformation of the category labels and better performance on validation data than unpenalized non-linear PCA and standard linear PCA. In particular, an application of penalized optimal scaling to ordinal data from the International Classification of Functioning, Disability and Health (ICF) is provided.

15.
There are two well-known methods for obtaining a guaranteed globally optimal solution to the problem of least-squares unidimensional scaling of a symmetric dissimilarity matrix: (a) dynamic programming, and (b) branch-and-bound. Dynamic programming is generally more efficient than branch-and-bound, but the former is limited to matrices with approximately 26 or fewer objects because of computer memory limitations. We present some new branch-and-bound procedures that improve computational efficiency, and enable guaranteed globally optimal solutions to be obtained for matrices with up to 35 objects. Experimental tests were conducted to compare the relative performances of the new procedures, a previously published branch-and-bound algorithm, and a dynamic programming solution strategy. These experiments, which included both synthetic and empirical dissimilarity matrices, yielded the following findings: (a) the new branch-and-bound procedures were often drastically more efficient than the previously published branch-and-bound algorithm, (b) when computationally feasible, the dynamic programming approach was more efficient than each of the branch-and-bound procedures, and (c) the new branch-and-bound procedures require minimal computer memory and can provide optimal solutions for matrices that are too large for dynamic programming implementation. The authors gratefully acknowledge the helpful comments of three anonymous reviewers and the Editor. We especially thank Larry Hubert and one of the reviewers for providing us with the MATLAB files for optimal and heuristic least-squares unidimensional scaling methods.
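The objective being optimized can be made concrete for tiny matrices by exhaustive enumeration over object orderings, using the known closed form for the optimal coordinates given a fixed ordering. This sketch (function name ours) is feasible only for very small n; the branch-and-bound and dynamic programming methods in the abstract exist precisely to avoid this factorial search.

```python
from itertools import permutations

def ls_uniscale(D):
    """Globally optimal least-squares unidimensional scaling, brute force.

    For an n x n symmetric dissimilarity matrix D, minimizes
    sum_{i<j} (D[i][j] - |x_i - x_j|)^2 over real coordinates x.
    Given an object ordering, optimal coordinates have the closed form
    x_i = (sum of D[i][j] over j placed before i
           - sum of D[i][j] over j placed after i) / n.
    """
    n = len(D)
    best = (float("inf"), None)
    for order in permutations(range(n)):
        pos = {obj: k for k, obj in enumerate(order)}
        x = [(sum(D[i][j] for j in range(n) if pos[j] < pos[i])
              - sum(D[i][j] for j in range(n) if pos[j] > pos[i])) / n
             for i in range(n)]
        loss = sum((D[i][j] - abs(x[i] - x[j])) ** 2
                   for i in range(n) for j in range(i + 1, n))
        if loss < best[0]:
            best = (loss, x)
    return best

# Dissimilarities that lie exactly on a line are fitted with zero loss:
D = [[0, 1, 3],
     [1, 0, 2],
     [3, 2, 0]]
loss, x = ls_uniscale(D)
print(round(loss, 6))  # 0.0
```

Branch-and-bound replaces the full enumeration with a search tree over partial orderings, pruning any branch whose lower bound on the loss already exceeds the best complete solution found so far.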

16.
In restricted statistical models, since the first derivatives of the likelihood displacement are often nonzero, the commonly adopted formulation for local influence analysis is not appropriate. However, there are two kinds of model restrictions in which the first derivatives of the likelihood displacement are still zero. General formulas for assessing local influence under these restrictions are derived and applied to factor analysis, since the restriction usually used in factor analysis satisfies these conditions. Various influence schemes are introduced and a comparison to the influence function approach is discussed. It is also shown that local influence for factor analysis is invariant to the scale of the data and is independent of the rotation of the factor loadings. The authors are most grateful to the referees, the Associate Editor, and the Editor for helpful suggestions for improving the clarity of the paper.

17.
Consider the typical problem in individual scaling, namely finding a common configuration and weights for each individual from the given interpoint distances or scalar products. Within the STRAIN framework it is shown that the problem of determining weights for a given configuration can be posed as a standard quadratic programming problem. A set of necessary conditions for an optimal configuration to satisfy are given. A closed form expression for the configuration is obtained for the one dimensional case and an approach is given for the two dimensional case.
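The weight-estimation step is easiest to see in one dimension, where the quadratic program collapses to a single nonnegative least-squares coefficient. A sketch under our own simplifications (one weight, one dimension, clipping at zero in place of the full constrained program; the function name and data are illustrative):

```python
def subject_weight_1d(x, B):
    """One-dimensional least-squares weight for a single subject.

    Minimizes ||B - w * x x'||_F^2 over w >= 0, where x is the common
    one-dimensional configuration and B is the subject's scalar-product
    matrix. This is the simplest instance of the quadratic-programming
    formulation described in the abstract; the general case carries one
    weight per dimension plus nonnegativity constraints.
    """
    n = len(x)
    num = sum(B[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    den = sum(xi * xi for xi in x) ** 2
    return max(0.0, num / den)  # clip in lieu of the full QP constraint

# A subject whose scalar products are exactly 2 * x x' recovers w = 2:
x = [1.0, -1.0, 2.0]
B = [[2 * xi * xj for xj in x] for xi in x]
print(subject_weight_1d(x, B))  # 2.0
```

In higher dimensions the same inner products form the Gram matrix of a p-variable quadratic program, which is the formulation the abstract poses within STRAIN.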

18.
The bisection method of animal psychophysical scaling was examined as a measurement procedure. The critical assumptions of bisection scaling, as described by Pfanzagl (1968), were tested to determine if a valid equal-interval scale could be derived. A valid scale was derived in which loudness for the rat (Rattus norvegicus; n = 13) was a power function of sound pressure for 4-kHz tones. Masking noise reduced the discriminability of tonal stimuli but did not affect the bisection point. This result is consistent with an interval scale representation of loudness and demonstrates scale meaningfulness. Loudness bisection data that have been reported in the literature for 3 species (humans, rats, and pigeons) are in substantial agreement with our results.

19.
For purposes of selection and classification there are two general reasons for scaling the mean and variance of the utility of performance across jobs. First, if differential utility across jobs does exist, then the payoff from a selection and classification system will be enhanced to the extent that accurate utility values are incorporated in the assignment system. Second, a valid utility metric would permit a more meaningful comparison of the gains achieved by alternative selection and classification procedures. It is argued in this paper that the Army context, and perhaps others, precludes using the dollar metric and estimates of SDy in dollars. Consequently, Project A conducted a relatively long series of exploratory workshops with Army personnel to (a) define the utility issue, (b) pilot test a wide variety of possible scaling methods, and (c) evaluate the methods that seemed most appropriate. On the basis of exploratory analysis, a combined procedure incorporating both an interval estimation and a ratio estimation method was used to estimate the utility of five different performance levels for each of 276 jobs (MOS) in the enlisted personnel system. The psychometric properties of the resulting scale values are analyzed and discussed.

20.
W. A. Gibson. Psychometrika, 1953, 18(2):111-113.
Guttman's scalogram board technique for reordering the columns and rows of a matrix is described and its disadvantages are pointed out. A simple and inexpensive procedure for doing the same job without these disadvantages is outlined. I am grateful to Professor Jozef Cohen of the University of Illinois for a five-minute conversation which greatly simplified the procedure described here.
