Similar Documents
20 similar documents found (search time: 31 ms).
1.
This paper provides a generalization of the Procrustes problem in which the errors are weighted from the right, or the left, or both. The solution is achieved by having the orthogonality constraint on the transformation be in agreement with the norm of the least squares criterion. This general principle is discussed and illustrated by the mathematics of the weighted orthogonal Procrustes problem.
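For orientation, the classical unweighted orthogonal Procrustes problem (find an orthogonal T minimizing ||AT − B||_F) has a closed-form solution via the singular value decomposition of AᵀB; the weighted variants discussed in this paper generalize it by matching the orthogonality constraint to the weighted norm. A minimal sketch of the unweighted case (function and variable names are ours):

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Classical (unweighted) case: the orthogonal T minimizing
    ||A @ T - B||_F is T = U @ Vt, where U, S, Vt = svd(A.T @ B)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random target rotation
B = A @ Q + 0.01 * rng.normal(size=(10, 3))    # noisy rotated copy of A
T = orthogonal_procrustes(A, B)
print(np.linalg.norm(A @ T - B))               # small residual: Q recovered
```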

2.
A family of solutions for linear relations among k sets of variables is proposed. It is shown how these solutions apply for k = 2, and how they can be generalized from there to k ≥ 3. The family of solutions depends on three independent choices: (i) to what extent a solution may be influenced by differences in variances of components within each set; (ii) to what extent the sets may be differentially weighted with respect to their contribution to the solution—including orthogonality constraints; (iii) whether or not individual sets of variables may be replaced by an orthogonal and unit normalized basis. Solutions are compared with respect to their optimality properties. For each solution the appropriate stationary equations are given. For one example it is shown how the determinantal equation of the stationary equations can be interpreted.
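For k = 2, families of this kind typically contain canonical correlation analysis as a member. A sketch of that classical special case (our illustration, not the paper's general family): whiten each set, then the singular values of the whitened cross-covariance are the canonical correlations.

```python
import numpy as np

def cca(X, Y, eps=1e-10):
    """Canonical correlations between two data sets (the k = 2 case):
    whiten each block, then take the SVD of the cross-covariance."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    def inv_sqrt(C):
        # inverse square root of a symmetric positive (semi)definite matrix
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T
    Sx, Sy = inv_sqrt(X.T @ X), inv_sqrt(Y.T @ Y)
    _, s, _ = np.linalg.svd(Sx @ (X.T @ Y) @ Sy)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(1)
Z = rng.normal(size=(200, 1))                         # shared signal
X = np.hstack([Z, rng.normal(size=(200, 2))])
Y = np.hstack([Z + 0.1 * rng.normal(size=(200, 1)), rng.normal(size=(200, 2))])
print(cca(X, Y))   # first canonical correlation should be near 1
```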

3.
The standard methods for decomposition and analysis of evoked potentials are bandpass filtering, identification of peak amplitudes and latencies, and principal component analysis (PCA). We discuss the limitations of these and other approaches and introduce wavelet packet analysis. Then we propose the "single-channel wavelet packet model," a new approach in which a unique decomposition is achieved using prior time-frequency information and differences in the responses of the components to changes in experimental conditions. Orthogonal sets of wavelet packets allow a parsimonious time-frequency representation of the components. The method allows energy in some wavelet packets to be shared among two or more components, so the components are not necessarily orthogonal. The single-channel wavelet packet model and PCA both require constraints to achieve a unique decomposition. In PCA, however, the constraints are defined by mathematical convenience and may be unrealistic. In the single-channel wavelet packet model, the constraints are based on prior scientific knowledge. We give an application of the method to auditory evoked potentials recorded from cats. The good frequency resolution of wavelet packets allows us to separate superimposed components in these data. Our present approach yields estimates of component waveforms and of the effects of experimental conditions on the amplitude of the components. We discuss future extensions that will provide confidence intervals and p values, allow for latency changes, and represent multichannel data.
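As a concrete illustration of the time-frequency representation involved, the sketch below decomposes a synthetic two-component "evoked potential" into a level-4 wavelet packet tree using the PyWavelets library and reports the energy in each frequency-ordered terminal node. Signal parameters and wavelet choice are illustrative, not those of the paper.

```python
import numpy as np
import pywt  # PyWavelets

# A toy "evoked potential": two transient oscillations at different frequencies.
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = (np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 0.3) / 0.05) ** 2)
     + np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.6) / 0.05) ** 2))

# Full wavelet packet tree to level 4; each terminal node covers a
# frequency band of roughly fs / 2**(level+1) Hz.
wp = pywt.WaveletPacket(data=x, wavelet='db4', mode='symmetric', maxlevel=4)
for node in wp.get_level(4, order='freq'):
    energy = np.sum(np.asarray(node.data) ** 2)
    print(f"node {node.path}: energy {energy:.3f}")
```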

4.
It has long been thought that degeneracy in unfolding only concerned non‐metric unfolding. Recently, Busing, Groenen, and Heiser have established that degeneracy occurs for all transformations that include estimation of an intercept and a slope. Consequently, degeneracy also plagues metric unfolding, since one member of the metric transformation family, the interval transformation, includes estimation of both an intercept and a slope. In this paper, a simple solution is proposed to the degeneracy problem for metric unfolding by penalizing for an undesirable intercept. An application of this approach will illustrate its potential.
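A minimal sketch of the idea of penalizing an undesirable intercept, using an interval transformation d̂ = a + b·δ fitted by penalized least squares. The quadratic penalty λa² is our illustrative stand-in, not the paper's exact penalty term.

```python
import numpy as np

def interval_fit(delta, d, lam):
    """Fit d ≈ a + b*delta by least squares with a ridge penalty lam*a**2
    on the intercept, discouraging the flat (degenerate) transformation.
    Illustrative only; the paper's penalty differs in form."""
    n = len(delta)
    # normal equations for minimizing sum((a + b*delta - d)**2) + lam*a**2
    A = np.array([[n + lam, delta.sum()],
                  [delta.sum(), (delta ** 2).sum()]])
    rhs = np.array([d.sum(), (delta * d).sum()])
    a, b = np.linalg.solve(A, rhs)
    return a, b

delta = np.array([1.0, 2.0, 3.0, 4.0])
d = np.array([1.1, 1.9, 3.2, 3.8])
print(interval_fit(delta, d, lam=0.0))    # plain interval transformation
print(interval_fit(delta, d, lam=10.0))   # intercept shrunk toward zero
```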

5.
When analyzing data, researchers are often confronted with a model selection problem (e.g., determining the number of components/factors in principal components analysis [PCA]/factor analysis or identifying the most important predictors in a regression analysis). To tackle such a problem, researchers may apply some objective procedure, like parallel analysis in PCA/factor analysis or stepwise selection methods in regression analysis. A drawback of these procedures is that they can only be applied to the model selection problem at hand. An interesting alternative is the CHull model selection procedure, which was originally developed for multiway analysis (e.g., multimode partitioning). However, the key idea behind the CHull procedure—identifying a model that optimally balances model goodness of fit/misfit and model complexity—is quite generic. Therefore, the procedure may also be used when applying many other analysis techniques. The aim of this article is twofold. First, we demonstrate the wide applicability of the CHull method by showing how it can be used to solve various model selection problems in the context of PCA, reduced K-means, best-subset regression, and partial least squares regression. Moreover, a comparison of CHull with standard model selection methods for these problems is performed. Second, we present the CHULL software, which may be downloaded from http://ppw.kuleuven.be/okp/software/CHULL/, to assist the user in applying the CHull procedure.
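A simplified sketch of the generic CHull idea: score each candidate model by the ratio of the fit gain per unit complexity before it to the fit gain after it, and pick the model with the highest ratio (the "elbow"). The full procedure first prunes candidates to the convex hull of the complexity/fit plot, which this sketch omits; names and data are ours.

```python
import numpy as np

def chull(complexity, fit):
    """Simplified CHull: among models sorted by complexity, score each
    interior model by (slope of fit before it) / (slope of fit after it);
    a high ratio marks the elbow. Skips the hull-pruning step of the
    full procedure."""
    c = np.asarray(complexity, float)
    f = np.asarray(fit, float)
    order = np.argsort(c)
    c, f = c[order], f[order]
    best, best_st = None, -np.inf
    for i in range(1, len(c) - 1):
        pre = (f[i] - f[i - 1]) / (c[i] - c[i - 1])
        post = (f[i + 1] - f[i]) / (c[i + 1] - c[i])
        st = pre / post if post > 0 else np.inf
        if st > best_st:
            best, best_st = i, st
    return order[best]

# e.g. fit = variance accounted for by PCA with 1..6 components
print(chull([1, 2, 3, 4, 5, 6],
            [0.40, 0.62, 0.66, 0.69, 0.71, 0.72]))  # -> 1, the 2-component model
```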

6.
This article presents a problem‐solving model to examine the often problematic relationship between expertise and creativity. The model has two premises, each the opposite of a common cliché. The first cliché asserts that creativity requires thinking outside‐the‐box. The first premise argues that experts can only think and problem solve inside the tool boxes of their expertise. The second cliché, that creativity requires freedom from constraints, points to the problem with expertise. Free to do anything, experts repeat what has worked best in the past. A solution is suggested by the second premise: to circumvent the liabilities of expertise, creativity requires constraints of a particular paired kind. The model is introduced as an expansion of prior process models focused on problem identification and construction. Problem‐finding is reanalyzed as constraint‐finding. A case study shows how one recognized creator, painter Chuck Close, uses constraints as a tool to solve the expertise‐creativity problem.

7.
Multidimensional unfolding methods suffer from the degeneracy problem in almost all circumstances. Most degeneracies are easily recognized: the solutions are perfect but trivial, characterized by approximately equal distances between points from different sets. A definition of an absolutely degenerate solution is proposed, which makes clear that these solutions only occur when an intercept is present in the transformation function. Many solutions for the degeneracy problem have been proposed and tested, but with little success so far. In this paper, we offer a substantial modification of an approach initiated by Kruskal and Carroll, who introduced a normalization factor based on the variance in the usual least squares loss function. Heiser (unpublished thesis, 1981) showed that the normalization factor proposed by Kruskal and Carroll was not strong enough to avoid degeneracies. The factor proposed in the present paper, based on the coefficient of variation, discourages or penalizes nonmetric transformations of the proximities with small variation, so that the procedure steers away from solutions with small variation in the interpoint distances. An algorithm is described for minimizing the re-adjusted loss function, based on iterative majorization. The results of a simulation study are discussed, in which the optimal range of the penalty parameters is determined. Two empirical data sets are analyzed by our method, clearly showing the benefits of the proposed loss function.
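A sketch of the central quantity: the coefficient of variation ν(d̂) shrinks as the transformed proximities flatten, so a factor like 1 + ω/ν² blows up precisely for near-degenerate solutions. The multiplicative form and ω below are illustrative assumptions, not the paper's exact penalized loss or its majorization algorithm.

```python
import numpy as np

def coefficient_of_variation(x):
    return np.std(x) / np.mean(x)

def penalized_stress(dhat, d, omega=1.0):
    """Illustrative penalized loss: raw normalized stress times a factor
    that grows when the transformed proximities dhat have little variation,
    the signature of a degenerate unfolding solution. The functional form
    and omega are assumptions, not the paper's formula."""
    stress = np.sum((dhat - d) ** 2) / np.sum(d ** 2)
    nu = coefficient_of_variation(dhat)
    return stress * (1.0 + omega / nu ** 2)
```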

8.
The powered vector method of factor analysis yields directly and without rotation factors satisfying objectives of parsimony, orthogonality, and meaningfulness. The method is objective, computationally efficient, and easily programmed for digital computers. The computational procedure is described. Illustrative analyses are presented. Results of applications of the powered vector method are compared with results obtained using the principal axes solution followed by orthogonal rotation.

9.
The CANDECOMP/PARAFAC (CP) model decomposes a three-way array into a prespecified number R of factors and a residual array by minimizing the sum of squares of the latter. It is well known that an optimal solution for CP need not exist. We show that if an optimal CP solution does not exist, then any sequence of CP factors monotonically decreasing the CP criterion value to its infimum will exhibit the features of a so-called “degeneracy”. That is, the parameter matrices become nearly rank deficient and the Euclidean norm of some factors tends to infinity. We also show that the CP criterion function does attain its infimum if one of the parameter matrices is constrained to be column-wise orthonormal.
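For concreteness, a plain alternating least squares (ALS) fit of the CP model in NumPy; this is the standard algorithm whose iterates, on problems without an optimal solution, display exactly the degeneracy described above (exploding factor norms). A minimal sketch, not the paper's code:

```python
import numpy as np

def cp_als(X, R, n_iter=200, seed=0):
    """Plain CP/ALS: fit X ≈ sum_r a_r ∘ b_r ∘ c_r by alternating
    least squares. Returns A (IxR), B (JxR), C (KxR) and the residual SS."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.normal(size=(I, R))
    B = rng.normal(size=(J, R))
    C = rng.normal(size=(K, R))
    for _ in range(n_iter):
        # each update solves a linear least-squares problem for one factor
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    resid = X - np.einsum('ir,jr,kr->ijk', A, B, C)
    return A, B, C, np.sum(resid ** 2)

# an exactly rank-2 array should be recovered with residual near zero;
# near-degenerate problems instead show exploding factor norms
rng = np.random.default_rng(3)
A0, B0, C0 = (rng.normal(size=(n, 2)) for n in (5, 4, 3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
print(cp_als(X, R=2)[3])
```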

10.
Antecedent-strengthening, a trivially valid inference of classical logic of the form P → Q ⊢ (P & R) → Q, has a counterpart in everyday reasoning that often fails. A plausible solution to the problem involves assuming an implicit ceteris paribus (CP) qualifier that can be explicated as an additional conjunct in the antecedent of the premise. The qualifier can be explicated as ‘everything else relevant remains unchanged’ or alternatively as ‘nothing interferes’. The qualifier appears most prominently in the context of the discussion of laws in the sciences, where these laws are often expressed with a CP qualifier. From an analysis of the qualifier’s role in the problem of antecedent-strengthening, we can learn more about CP qualifiers in general and in their application to the laws used in the sciences.

11.
The other-race effect was examined in a series of experiments and simulations that looked at the relationships among observer ratings of typicality, familiarity, attractiveness, memorability, and the performance variables of d′ and criterion. Experiment 1 replicated the other-race effect with our Caucasian and Japanese stimuli for both Caucasian and Asian observers. In Experiment 2, we collected ratings from Caucasian observers on the faces used in the recognition task. A Varimax-rotated principal components analysis on the rating and performance data for the Caucasian faces replicated Vokey and Read’s (1992) finding that typicality is composed of two orthogonal components, dissociable via their independent relationships to: (1) attractiveness and familiarity ratings and (2) memorability ratings. For Japanese faces, however, we found that typicality was related only to memorability. Where performance measures were concerned, two additional principal components dominated by criterion and by d′ emerged for Caucasian faces. For the Japanese faces, however, the performance measures of d′ and criterion merged into a single component that represented a second component of typicality, one orthogonal to the memorability-dominated component. A measure of face representation quality extracted from an autoassociative neural network trained with a majority of Caucasian faces and a minority of Japanese faces was incorporated into the principal components analysis. For both Caucasian and Japanese faces, the neural network measure related both to memorability ratings and to human accuracy measures. Combined, the human data and simulation results indicate that the memorability component of typicality may be related to small, local, distinctive features, whereas the attractiveness/familiarity component may be more related to the global, shape-based properties of the face.
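The "representation quality" measure can be illustrated with a linear stand-in for the autoassociative network: project a probe face onto the subspace spanned by the training faces and take the cosine between the face and its reconstruction. Everything below (dimensions, synthetic "faces", the linear projector) is an illustrative assumption, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 200
majority = rng.normal(size=(40, d))      # many faces of one "race"
minority = rng.normal(size=(5, d))       # few faces of the other
train = np.vstack([majority, minority])

# Orthogonal projector onto the span of the trained faces; a linear
# autoassociator reconstructs any input as its projection onto this span.
P = np.linalg.pinv(train) @ train        # d x d

def quality(x):
    """Cosine between a face and its reconstruction: 1.0 for faces
    inside the trained subspace, lower for novel faces."""
    r = P @ x
    return float(x @ r / (np.linalg.norm(x) * np.linalg.norm(r)))

print(quality(train[0]))                 # trained face: ~1.0
print(quality(rng.normal(size=d)))       # novel face: noticeably lower
```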

12.
Three-Mode Component Analysis with Crisp or Fuzzy Partition of Units
A new methodology is proposed for the simultaneous reduction of units, variables, and occasions of a three-mode data set. Units are partitioned into a reduced number of classes, while, simultaneously, components for variables and occasions accounting for the largest common information for the classification are identified. The model is a constrained three-mode factor analysis and it can be seen as a generalization of the REDKM model proposed by De Soete and Carroll for two-mode data. The least squares fitting problem is mathematically formalized as a constrained problem in continuous and discrete variables. An iterative alternating least squares algorithm is proposed to give an efficient solution to this minimization problem in the crisp and fuzzy classification context. The performances of the proposed methodology are investigated by a simulation study comparing our model with other competing methodologies. Different procedures for starting the proposed algorithm have also been tested. A discussion of some interesting differences in the results follows. Finally, an application to real data illustrates the ability of the proposed model to provide substantive insights into the data complexities.

13.
14.
Tucker has outlined an application of principal components analysis to a set of learning curves, for the purpose of identifying meaningful dimensions of individual differences in learning tasks. Since the principal components are defined in terms of a statistical criterion (maximum variance accounted for) rather than a substantive one, it is typically desirable to rotate the components to a more interpretable orientation. Simple structure is not a particularly appealing consideration for such a rotation; it is more reasonable to believe that any meaningful factor should form a (locally) smooth curve when the component loadings are plotted against trial number. Accordingly, this paper develops a procedure for transforming an arbitrary set of component reference curves to a new set which are mutually orthogonal and, subject to orthogonality, are as smooth as possible in a well defined (least squares) sense. Potential applications to learning data, electrophysiological responses, and growth data are indicated.
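One way to realize this criterion: measure the roughness of a curve by its sum of squared second differences. Total roughness ||DAT||²_F is invariant under orthogonal T, so the transformation that makes the first rotated curve smoothest, the second smoothest subject to orthogonality, and so on, is the eigenvector matrix of AᵀDᵀDA with eigenvalues in ascending order. A sketch of this construction (our reconstruction of the least squares idea, which may differ from the published algorithm in detail):

```python
import numpy as np

def smooth_rotation(A):
    """Rotate component loadings A (trials x components) so that successive
    rotated curves are as smooth as possible in the least squares sense
    (roughness = squared second differences), subject to orthogonality."""
    n, r = A.shape
    # second-difference operator D: (n-2) x n
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    M = A.T @ D.T @ D @ A       # roughness metric in component space
    w, T = np.linalg.eigh(M)    # eigh returns ascending eigenvalues,
    return A @ T                # so the first rotated curve is smoothest
```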

15.
The Hebrew Tridimensional Personality Questionnaire was administered to over a thousand individuals in the community, 16–78 years of age. Factor analysis was run first on individual items, and then on the 12 sub-scales described by [Cloninger, C.R., Przybeck, T.R., & Svrakic, D.M. (1991). The TPQ: U.S. normative data. Psychological Reports, 69, 1047–1051]. The factor analyses were restricted to four orthogonal factors in order to attempt confirmation of the corrected four-factor solution [Stallings, M.C., Hewitt, J.K., Cloninger, C.R., Heath, A.C., & Eaves, L.J. (1996). Genetic and environmental structure of the TPQ: three or four temperament dimensions? Journal of Personality and Social Psychology, 70(1), 127–140]. In the individual item analysis four orthogonal factors recognizable as Novelty Seeking, Reward Dependence, Harm Avoidance and Persistence emerged. However, only up to half of the items originally ascribed to each factor loaded sufficiently and exclusively on the appropriate factor. When the 12 sub-scales were entered into factor analysis the four orthogonal factors were produced, and the structure satisfactorily confirmed. A few exceptions to orthogonality were observed. The data were analyzed for sex differences and age effects. Women scored higher than men on most sub-scales of Harm Avoidance and Reward Dependence. The younger group (up to 21 years of age) scored higher on Novelty Seeking and Reward Dependence and lower on Harm Avoidance than the older group, but no sex by age interaction was detected. Preliminary normative Israeli data are supplied, and implications of the group differences discussed.

16.
The aggregation of consistent individual judgments on logically interconnected propositions into a collective judgment on those propositions has recently drawn much attention. Seemingly reasonable aggregation procedures, such as propositionwise majority voting, cannot ensure an equally consistent collective conclusion. In this paper, we argue that quite often we not only want to make a factually right decision, but also to correctly evaluate the reasons for that decision. In other words, we address the problem of tracking the truth. We set up a probabilistic model that generalizes the analysis of Bovens and Rabinowicz (Synthese 150: 131–153, 2006) and use it to compare several aggregation procedures. Demanding some reasonable adequacy constraints, we demonstrate that a reasons- or premise-based aggregation procedure tracks the truth better than any other procedure. However, we also show that such a procedure is not in all circumstances easy to implement, leaving actual decision-makers with a tradeoff problem.
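The contrast between premise-based and conclusion-based aggregation can be made concrete with a small Monte Carlo in the spirit of the paper's probabilistic model (independent voters, each judging every premise correctly with a fixed competence). The agenda, competence value, and jury size below are illustrative assumptions.

```python
import numpy as np

def simulate(n_voters=11, competence=0.6, n_trials=20000, seed=0):
    """Compare two procedures on the agenda {p, q, p&q}: 'premise-based'
    takes majorities on p and q and derives the conclusion;
    'conclusion-based' takes the majority on each voter's own p&q verdict.
    Illustrative model in the spirit of the paper, not its exact setup."""
    rng = np.random.default_rng(seed)
    truth = rng.integers(0, 2, size=(n_trials, 2)).astype(bool)
    # each voter judges each premise correctly with prob = competence
    correct = rng.random((n_trials, n_voters, 2)) < competence
    votes = truth[:, None, :] ^ ~correct          # flip judgment when wrong
    # premise-based: majority on each premise, then conjunction
    prem_maj = votes.sum(axis=1) > n_voters / 2
    premise_based = prem_maj[:, 0] & prem_maj[:, 1]
    # conclusion-based: each voter's own conclusion, then majority
    own_concl = votes[:, :, 0] & votes[:, :, 1]
    conclusion_based = own_concl.sum(axis=1) > n_voters / 2
    target = truth[:, 0] & truth[:, 1]
    return (np.mean(premise_based == target),
            np.mean(conclusion_based == target))

print(simulate())  # premise-based typically tracks the conjunction better
```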

17.
We analyzed data from 87 mothers of children ages 15 to 44 months with cerebral palsy (CP) or no diagnosis, who completed the Dyadic Adjustment Scale, Parenting Stress Index, Support Functions Scale, and Inventory of Social Support. Principal components analysis of the 15 subscales from the 5 measures revealed few cross-measure loadings. Mothers of children with CP (severe or mild) reported higher levels of parenting stress than did mothers of controls. However, cluster analysis of self-report measures yielded a 5-cluster solution, with no diagnostic group differences across clusters. That is, there were no overall differences in self-reported family functioning according to presence or severity of the child's disability. The results are discussed in terms of the organization of family systems and their relationship to child diagnosis. Clinical implications for assessing and working with families are noted.

18.
In the design of real-time systems, it is often the case that certain process parameters, such as the execution time of a job, are not known precisely. The challenge in real-time system design, then, is to develop techniques that efficiently meet the requirements of impreciseness, while simultaneously guaranteeing safety. In a traditional scheduling model, such as those discussed in [M. Pinedo, Scheduling: Theory, Algorithms, and Systems, Prentice-Hall, Englewood Cliffs, 1995] and [P. Brucker, Scheduling Algorithms, second ed., Springer, 1998], the tendency is to either overlook the effects of impreciseness or to simplify the issue by assuming worst-case values. This assumption is unrealistic and, at the same time, may cause certain timing constraints to be violated at run-time. Further, depending on the nature of the constraints involved, it is not immediately apparent what the worst-case value for a given parameter is. Whereas in traditional scheduling, constraints among jobs are no more complex than those that can be represented by a precedence graph, in real-time scheduling, complicated constraints such as relative timing constraints are commonplace. Additionally, in traditional scheduling the purpose is to achieve a schedule that optimizes some performance metric, whereas in real-time scheduling the goal is to ensure that the imposed constraints are met at run-time. In this paper, we study the problem of scheduling a set of ordered, non-preemptive jobs under non-constant execution times. Typical applications for variable execution time scheduling include process scheduling in Real-Time Operating Systems such as Maruti, compiler scheduling, database transaction scheduling and automated machine control. An important feature of application areas such as robotics is the interaction between execution times of various processes. We explicitly model this interaction through the representation of execution time vectors as points in convex sets. This modeling vastly extends previous models of execution times as either single points or range-bound intervals. Our algorithms do not assume any knowledge of the distributions of execution times, i.e. they are Zero-Clairvoyant. We present both sequential and parallel algorithms for determining the existence of a Zero-Clairvoyant schedule. To the best of our knowledge, our techniques are the first of their kind.
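A toy illustration of the convex-set model: if the admissible execution-time vectors form a polytope, a zero-clairvoyant safety check can maximize the total completion time over that polytope with a linear program and compare it to the deadline. All numbers and constraints below are made up for illustration; the paper's algorithms handle richer relative timing constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Three ordered, non-preemptive jobs whose execution-time vector e lies in
# a convex polytope; interactions between jobs appear as linear constraints.
A_ub = np.array([[1.0, 1.0, 0.0],   # e1 + e2 <= 7  (coupled jobs)
                 [0.0, 0.0, 1.0]])  # e3 <= 4
b_ub = np.array([7.0, 4.0])
bounds = [(1.0, 5.0)] * 3           # per-job execution-time ranges
deadline = 12.0

# The schedule is safe iff the maximum completion time over the whole
# polytope meets the deadline; linprog minimizes, so negate the objective.
res = linprog(c=[-1.0, -1.0, -1.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds,
              method='highs')
worst_case = -res.fun
print(worst_case, '<= deadline?', worst_case <= deadline)  # 11.0 -> safe
```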

19.
Kroonenberg and de Leeuw (1980) have developed an alternating least-squares method TUCKALS-3 as a solution for Tucker's three-way principal components model. The present paper offers some additional features of their method. Starting from a reanalysis of Tucker's problem in terms of a rank-constrained regression problem, it is shown that the fitted sum of squares in TUCKALS-3 can be partitioned according to elements of each mode of the three-way data matrix. An upper bound to the total fitted sum of squares is derived. Finally, a special case of TUCKALS-3 is related to the Carroll/Harshman CANDECOMP/PARAFAC model.
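The partitioning of the fitted sum of squares can be illustrated directly: with column-wise orthonormal component matrices, the fitted array's total sum of squares equals that of the core, and the share attributable to one element of a mode is the squared norm of its slice of the fitted array. A small sketch under our own notation:

```python
import numpy as np

# Tucker3 fitted array from core G (P x Q x S) and column-wise orthonormal
# component matrices A (I x P), B (J x Q), C (K x S).
rng = np.random.default_rng(5)
I, J, K, P, Q, S = 6, 5, 4, 2, 2, 2
A, _ = np.linalg.qr(rng.normal(size=(I, P)))
B, _ = np.linalg.qr(rng.normal(size=(J, Q)))
C, _ = np.linalg.qr(rng.normal(size=(K, S)))
G = rng.normal(size=(P, Q, S))

Xhat = np.einsum('pqs,ip,jq,ks->ijk', G, A, B, C)

# fitted sum of squares partitioned over the elements of the first mode:
# the i-th share is the squared norm of the i-th slice of Xhat
per_element = np.sum(Xhat ** 2, axis=(1, 2))
print(per_element, per_element.sum(), np.sum(G ** 2))  # shares add to ||G||^2
```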

20.
The Candecomp/Parafac (CP) method decomposes a three-way array into a prespecified number R of rank-1 arrays, by minimizing the sum of squares of the residual array. The practical use of CP is sometimes complicated by the occurrence of so-called degenerate sequences of solutions, in which several rank-1 arrays become highly correlated in all three modes and some elements of the rank-1 arrays become arbitrarily large. We consider the real-valued CP decomposition of all known three-sliced arrays, i.e., of size p×q×3, with a two-valued typical rank. These are the 5×3×3 and 8×4×3 arrays, and the 3×3×4 and 3×3×5 arrays with symmetric 3×3 slices. In the latter two cases, CP is equivalent to the Indscal model. For a typical rank of {m,m+1}, we consider the CP decomposition with R=m of an array of rank m+1. We show that (in most cases) the CP objective function does not have a minimum but an infimum. Moreover, any sequence of feasible CP solutions in which the objective value approaches the infimum will become degenerate. We use the tools developed in Stegeman (2006), who considers p×p×2 arrays, and present a framework of analysis which is of use to the future study of CP degeneracy related to a two-valued typical rank. Moreover, our examples show that CP uniqueness is not necessary for degenerate solutions to occur.
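The symptoms described above suggest a simple runtime diagnostic for CP sequences: track the product of the per-mode cosines between pairs of rank-1 components together with their norms; a pair whose triple cosine product approaches −1 while the norms diverge signals degeneracy. A heuristic sketch (our illustration, not a method from the paper):

```python
import numpy as np

def degeneracy_diagnostics(A, B, C):
    """Given CP factor matrices (columns = rank-1 components), return each
    component's norm and the most negative triple cosine product between
    any pair of components. Norms growing without bound together with a
    triple cosine near -1 are the classic signature of degeneracy."""
    norms = (np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=0)
             * np.linalg.norm(C, axis=0))
    unit = lambda M: M / np.linalg.norm(M, axis=0)
    Au, Bu, Cu = unit(A), unit(B), unit(C)
    triple = (Au.T @ Au) * (Bu.T @ Bu) * (Cu.T @ Cu)   # R x R cosine products
    R = A.shape[1]
    worst = min(triple[r, s] for r in range(R) for s in range(r + 1, R))
    return norms, worst
```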

