Similar Articles
20 similar articles found (search time: 31 ms)
1.
In many areas of psychology, researchers compare the output of pairs of people with that of people working individually. This is done by calculating estimates for nominal groups: the output two individuals would have produced had they worked together. Often this is done by creating a single set of pairs, either randomly or based on their location in a data file. This paper shows that this approach introduces unnecessary error. Two alternatives are developed and described. The first calculates statistics for all permissible sets of pairs; unfortunately, the number of sets is too large for modern computers even at moderate sample sizes. The second alternative calculates statistics on all possible pairs. Several simulations are reported which show that both methods provide good estimates for the mean and trimmed mean. However, the all-pairs procedure provides a biased estimate of the variance; based on simulations, an adjustment is recommended for estimating the variance. Functions in S-Plus/R are provided in an appendix and are available from the author's Web page along with updates and alternatives (www.sussex.ac.uk/users/danw/s-plus/ngstats.htm).
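The "all pairs" idea above is simple to sketch. The snippet below is not the author's S-Plus/R function; the function name and the default of taking the pair maximum as the nominal-group output are illustrative assumptions, since the pair statistic depends on the task being modeled.

```python
from itertools import combinations

def all_pairs_mean(scores, pair_stat=max):
    """Mean of a pair statistic over all possible pairs of individuals.

    pair_stat defines the hypothetical output of a nominal pair;
    max (the better member's score) is one common but not mandatory choice.
    """
    pairs = list(combinations(scores, 2))
    return sum(pair_stat(a, b) for a, b in pairs) / len(pairs)
```

With individual scores 1, 2, 3, the three pair maxima are 2, 3, 3, so the all-pairs mean is 8/3.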

2.
Repeated measures designs have been widely employed in psychological experimentation; however, such designs have rarely been analyzed by means of permutation procedures. In the present paper, certain aspects of hypothesis tests in a particular repeated measures design (one non-repeated factor (A) and one repeated factor (B), with K subjects per level of A) were investigated by means of permutation rather than sampling processes. The empirical size and power of certain normal-theory F-tests obtained under permutation were compared to their nominal normal-theory values. Data sets were established in which various combinations of kurtosis of subject means and intra-subject variance heterogeneity existed, so that their effect upon the agreement of these two models could be ascertained. The results indicated that, except in cases of high intra-subject variance heterogeneity, the usual F-tests on B and AB exhibited approximately the same size and power characteristics whether based upon a permutation or a normal-theory sampling basis. This research was prepared under Contract No. 2593 from the Cooperative Research Branch of the U.S. Office of Education.
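The permutation principle the abstract examines can be illustrated with a minimal two-sample permutation test on a difference of means. This is a generic sketch, not the paper's repeated-measures design; the function name and the 2,000-permutation default are arbitrary choices.

```python
import random

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sample permutation test on |mean(x) - mean(y)|.

    The p-value is the fraction of random relabelings whose mean
    difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            count += 1
    return count / n_perm
```

For two clearly separated groups the returned p-value is small; for identical groups it approaches 1.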

3.
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical applications in the behavioral sciences. We compared the performances of nine prominent heuristic procedures for WCSS partitioning across 324 simulated data sets representative of a broad spectrum of test conditions. Performance comparisons focused both on percentage deviation from the "best-found" WCSS values and on recovery of true cluster structure. A real-coded genetic algorithm and variable neighborhood search heuristic were the most effective methods; however, a straightforward two-stage heuristic algorithm, HK-means, also yielded exceptional performance. A follow-up experiment using 13 empirical data sets from the clustering literature generally supported the results of the experiment using simulated data. Our findings have important implications for behavioral science researchers, whose theoretical conclusions could be adversely affected by poor algorithmic performances.
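The criterion being optimized is straightforward to compute for any candidate partition. Below is a minimal NumPy sketch of the WCSS objective itself (the function name is ours; this is the criterion, not any of the nine heuristics the paper compares):

```python
import numpy as np

def wcss(X, labels):
    """Within-cluster sum of squared deviations from cluster centroids.

    X: (n, p) data matrix; labels: length-n array of cluster assignments.
    """
    total = 0.0
    for k in np.unique(labels):
        cluster = X[labels == k]
        # squared Euclidean deviations from this cluster's centroid
        total += ((cluster - cluster.mean(axis=0)) ** 2).sum()
    return total
```

Heuristics such as k-means or variable neighborhood search then search over label vectors to drive this quantity down.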

4.
Too often, psychological debates become polarized into dichotomous positions. Such polarization may have occurred with respect to Campbell's (1960) blind variation and selective retention (BVSR) theory of creativity. To resolve this unnecessary controversy, BVSR was radically reformulated with respect to creative problem solving. The reformulation began by defining (a) potential solution sets consisting of k possible solutions each described by their respective probability and utility values, (b) a set sightedness metric that gauges the extent to which the probabilities correspond to the utilities, and (c) a solution creativity index based on the joint improbability and utility of each solution. These definitions are then applied to representative cases in which simultaneous or sequential generate‐and‐test procedures scrutinize solution sets of variable size and with representative patterns of probabilities and utilities. The principal features of BVSR theory were then derived, including the implications of superfluity and backtracking. Critically, it was formally demonstrated that the most creative solutions must emerge from solution sets that score extremely low in sightedness. Although this preliminary revision has ample room for further development, the demonstration proves that BVSR's explanatory value does not depend on any specious association with Darwin's theory of evolution.

5.
As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling the F tests of (repeated-measures) analysis of variance have been defined. In this article we outline the appropriate procedure for pooling analysis of variance (ANOVA) results across multiply imputed data sets. It involves both reformulating the ANOVA model as a regression model using effect coding of the predictors and applying the existing combination rules for regression models. The proposed procedure is illustrated using 3 example data sets. The pooled results of these 3 examples provide plausible F and p values.
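For scalar regression estimates, the existing combination rules the abstract refers to are Rubin's rules. A minimal sketch of that pooling step (not the paper's full effect-coded ANOVA procedure) might look like this:

```python
def pool_estimates(estimates, variances):
    """Rubin's rules: pool point estimates and their sampling variances
    obtained from m completed (imputed) data sets."""
    m = len(estimates)
    qbar = sum(estimates) / m                                 # pooled point estimate
    w = sum(variances) / m                                    # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)     # between-imputation variance
    t = w + (1 + 1 / m) * b                                   # total variance
    return qbar, t
```

With estimates 1, 2, 3 and a common within-imputation variance of 0.5, the pooled estimate is 2 and the total variance is 0.5 + (4/3)·1.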

6.
In this article we are concerned with the situation where one is estimating the outcome of a variable Y, with nominal measurement, on the basis of the outcomes of several predictor variables, X1, X2, ..., Xr, each with nominal measurement. We assume that we have a random sample from the population. Here we are interested in estimating p, the probability of successfully predicting a new Y from the population, given the X measurements for this new observation. We begin by proposing an estimator, pa, which is the success rate in predicting Y from the current sample. We show that this estimator is always biased upwards. We then propose a second estimator, pb, which divides the original sample into two groups, a holdout group and a training group, in order to estimate p. We show that procedures such as these are always biased downwards, no matter how we divide the original sample into the two groups. Because one of these estimators tends to overestimate p while the other tends to underestimate p, we propose as a heuristic solution to use the mean of these two estimators, pc, as an estimator for p. We then perform several simulation studies to compare the three estimators with respect to both bias and MSE. These simulations seem to confirm that pc is a better estimator than either of the other two.
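The three estimators can be sketched as follows, assuming a simple modal-category prediction rule and an ordered train/holdout split. Both are illustrative choices of ours; the paper does not prescribe a particular classifier or splitting scheme.

```python
from collections import Counter, defaultdict

def modal_rule(pairs):
    """Fit the modal-category rule: predict the most common Y for each X pattern."""
    table = defaultdict(Counter)
    for x, y in pairs:
        table[x][y] += 1
    return {x: c.most_common(1)[0][0] for x, c in table.items()}

def success_rate(rule, pairs, default=None):
    """Fraction of (x, y) pairs for which the rule predicts y."""
    hits = sum(rule.get(x, default) == y for x, y in pairs)
    return hits / len(pairs)

def pc_estimate(pairs, split):
    """Mean of the resubstitution rate pa (biased upwards) and the
    holdout rate pb (biased downwards)."""
    pa = success_rate(modal_rule(pairs), pairs)     # trained and tested on all data
    train, hold = pairs[:split], pairs[split:]
    pb = success_rate(modal_rule(train), hold)      # trained on one part, tested on the other
    return (pa + pb) / 2
```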

7.
Comparing the variances of dependent groups
Recently several new attempts have been made to find a robust method for comparing the variances of J dependent random variables. However, empirical studies have shown that all of these procedures can give unsatisfactory results. This paper examines several new procedures that are derived heuristically. One of these procedures was found to perform better than all of the robust procedures studied here, and so it is recommended for general use. The author would like to thank the reviewers for their very helpful comments on an earlier draft of this paper.

8.
Although taboo words are used to study emotional memory and attention, no easily accessible normative data are available that compare taboo, emotionally valenced, and emotionally neutral words on the same scales. Frequency, inappropriateness, valence, arousal, and imageability ratings for taboo, emotionally valenced, and emotionally neutral words were made by 78 native-English-speaking college students from a large metropolitan university. The valenced set comprised both positive and negative words, and the emotionally neutral set comprised category-related and category-unrelated words. To account for influences of demand characteristics and personality factors on the ratings, frequency and inappropriateness measures were decomposed into raters’ personal reactions to the words versus raters’ perceptions of societal reactions to the words (personal use vs. familiarity and offensiveness vs. tabooness, respectively). Although all word sets were rated higher in familiarity and tabooness than in personal use and offensiveness, these differences were most pronounced for the taboo set. In terms of valence, the taboo set was most similar to the negative set, although it yielded higher arousal ratings than did either valenced set. Imageability for the taboo set was comparable to that of both valenced sets. The ratings of each word are presented for all participants as well as for single-sex groups. The inadequacies of the application of normative data to research that uses emotional words and the conceptualization of taboo words as a coherent category are discussed. Materials associated with this article may be accessed at the Psychonomic Society’s Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive.

9.
The reports of many creative individuals suggest the use of mental imagery in scientific and artistic production. A variety of protocols have tested the association between mental imagery and creativity, but the individual differences approach has been most frequently employed. This approach is assessed here through a range of meta‐analytic tests. Database searches revealed 18 papers employing the individual differences approach that were subjected to a conservative set of selection criteria. Nine studies (1,494 participants) were included in the final analyses. A marginal, but statistically significant, Fisher's Z‐transformed correlation coefficient was revealed. Further analyses showed little difference between form and type of self‐reported imagery and divergent thinking. Explanations for the failure to account for more than 3% of the variance in the data sets are discussed in the context of anecdotal reports, task validity, and design problems.

10.
Exploratory factor analysis is a popular statistical technique used in communication research. Although exploratory factor analysis (EFA) and principal components analysis (PCA) are different techniques, PCA is often employed incorrectly to reveal latent constructs (i.e., factors) of observed variables, which is the purpose of EFA. PCA is more appropriate for reducing measured variables to a smaller set of variables (i.e., components) while retaining as much of the total variance in the measured variables as possible. Furthermore, the popular use of varimax rotation raises some concerns about the relationships among the factors that researchers claim to discover. This paper discusses the distinct purposes of PCA and EFA, uses two data sets as examples to highlight the differences in results between these procedures, and reviews the use of each technique in three major communication journals: Communication Monographs, Human Communication Research, and Communication Research.
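The variance-retention property that makes PCA a data-reduction (rather than latent-construct) tool can be shown in a few lines of NumPy. This is an illustrative sketch of ours, not the paper's analysis:

```python
import numpy as np

def pca_variance_ratio(X):
    """Proportion of total variance retained by each principal component,
    computed from the eigenvalues of the sample covariance matrix."""
    Xc = X - X.mean(axis=0)                   # center each measured variable
    cov = np.cov(Xc, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]   # eigenvalues in descending order
    return eigvals / eigvals.sum()
```

For two perfectly correlated variables, the first component retains all of the variance, which is exactly the data-reduction behavior described above.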

11.
A commonly used method of estimating population sensitivity is the so‐called averaged d′ method, in which the arithmetic mean of a set of individual d′ values is taken as a population sensitivity estimator. This practice ignores the fact that each individual d′ is itself an estimator with an inherent variance. For observations with different levels of precision, the arithmetic mean is not the best estimator of a population parameter; it may lead to an estimate with a large variation. Another fact, which is often ignored, is that the variance of individual d′ involves both between‐ and within‐subject variation in a random effects model when population sensitivity and its level of precision are estimated. Failing to account for both components of variance leads to an underestimate of variation and an overestimate of precision for the estimator. In this paper a lognormal distribution rather than a normal distribution is assumed for individual sensitivity. An iterative weighting procedure is proposed for estimating population sensitivity on the log scale on the basis of a random effects model, and an ordinary weighting procedure is proposed for estimating group sensitivity on the log scale on the basis of a fixed effects model. The levels of precision of the population and group sensitivity estimators are also given. Numerical examples illustrate the estimation procedures.
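A simple one-pass stand-in for the weighting idea, assuming lognormal individual sensitivities with known log-scale variances, is an inverse-variance weighted mean on the log scale. The paper's actual procedure is iterative; this simplified version is for illustration only.

```python
import math

def weighted_log_mean(values, variances):
    """Inverse-variance weighted mean of log-sensitivities, back-transformed.

    values: positive individual sensitivities (e.g., d' estimates);
    variances: their log-scale variances. More precise observations
    (smaller variance) receive proportionally larger weight.
    """
    logs = [math.log(v) for v in values]
    weights = [1.0 / s for s in variances]
    wmean = sum(w, x_ := None) if False else (
        sum(w * x for w, x in zip(weights, logs)) / sum(weights)
    )
    return math.exp(wmean)  # back-transform to the original scale
```

When all observations are equally precise, this reduces to the geometric mean of the individual values.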

12.
In many human movement studies, angle-time series data are measured on several groups of individuals. Current methods to compare groups either compare the mean value in each group or use multivariate techniques such as principal components analysis and perform tests on the principal component scores. Such methods have been useful, but discard a large amount of information. Functional data analysis (FDA) is an emerging statistical analysis technique in human movement research which treats the angle-time series data as a function rather than a series of discrete measurements; this approach retains all of the information in the data. Functional principal components analysis (FPCA) is an extension of multivariate principal components analysis which examines the variability of a sample of curves and has been used to examine differences in movement patterns of several groups of individuals. Currently the functional principal components (FPCs) for each group are either determined separately (yielding components that are group-specific) or by combining the data for all groups and determining the FPCs of the combined data (yielding components that summarize the entire data set). The group-specific FPCs contain both within- and between-group variation, and issues arise when comparing FPCs across groups because the order of the FPCs may differ in each group. The FPCs of the combined data may not adequately describe all groups of individuals, and comparisons between groups typically use t-tests of the mean FPC scores in each group. When these differences are statistically non-significant, it can be difficult to determine how a particular intervention is affecting movement patterns or how injured subjects differ from controls. In this paper we aim to perform FPCA in a manner allowing sensible comparisons between groups of curves. A statistical technique called common functional principal components analysis (CFPCA) is implemented. CFPCA identifies the common sources of variation evident across groups but allows the order of each component to change for a particular group. This allows for the direct comparison of components across groups. We use our method to analyze a biomechanical data set examining the mechanisms of chronic Achilles tendon injury and the functional effects of orthoses.

13.
A model is presented for evaluating potential effectiveness of a Bayesian classification system using the expected value of the posterior probability for true classifications as an evaluation metric. For a given set of input parameters, the value of this complex metric is predictable from a simply computed row variance metric. Prediction equations are given for several representative sets of input parameters.

14.
Clustering n objects into k groups under optimal scaling of variables
We propose a method to reduce many categorical variables to one variable with k categories, or stated otherwise, to classify n objects into k groups. Objects are measured on a set of nominal, ordinal or numerical variables, or any mix of these, and are represented as n points in p-dimensional Euclidean space. Starting from homogeneity analysis, also called multiple correspondence analysis, the essential feature of our approach is that these object points are restricted to lie at only one of k locations. It follows that these k locations must be equal to the centroids of all objects belonging to the same group, which corresponds to a sum-of-squared-distances clustering criterion. The problem is not only to estimate the group allocation, but also to obtain an optimal transformation of the data matrix. An alternating least squares algorithm and an example are given. The authors thank Eveline Kroezen and Teije Euverman for their comments on a previous draft of this paper.

15.
On methods in the analysis of profile data
This paper is concerned with methods for analyzing quantitative, non-categorical profile data, e.g., a battery of tests given to individuals in one or more groups. It is assumed that the variables have a multinormal distribution with an arbitrary variance-covariance matrix. Approximate procedures based on classical analysis of variance are presented, including an adjustment to the degrees of freedom resulting in conservative F tests. These can be applied to the case where the variance-covariance matrices differ from group to group. In addition, exact generalized multivariate analysis methods are discussed. Examples are given illustrating both techniques. We are indebted to Mrs. Norma French for performing all the calculations appearing in this paper.

16.
Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent-groups and repeated measures designs are presented. A Windows program for the analysis is available.
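The matching analogue to variance can be sketched directly as the proportion of matching response pairs. The function name is ours, and this shows only the core quantity; the full procedure partitions these proportions by source in the factorial design and uses resampling for significance.

```python
from itertools import combinations

def match_proportion(responses):
    """Proportion of all response pairs that match: the nominal
    analogue of (lack of) variability for a set of responses."""
    pairs = list(combinations(responses, 2))
    return sum(a == b for a, b in pairs) / len(pairs)
```

For the responses ['x', 'x', 'y'], one of the three pairs matches, giving a proportion of 1/3; a perfectly homogeneous set gives 1.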

17.
This paper argues for the validity of inferences that take the form: A is more X than B; therefore A and B are both X. After considering representative counterexamples, it is claimed that these inferences are valid if and only if the comparative terms in the inference are taken from no more than one comparative set, where a comparative set is understood to comprise a positive, comparative, and superlative, represented as {X, more X than, most X}. In all instances where arguments appearing to be of this form are invalid, the argument has fallaciously taken terms from more than one comparative set. The fallacy of appealing to more than one comparative set in an inference involving comparative terms is shown to be analogous to the fallacy of equivocation in argumentation. The paper concludes by suggesting that a conflation of logical issues with grammatical issues is the core difficulty leading some to consider inferences of the form A is more X than B; therefore A and B are X to be invalid.

18.
A composite step‐down procedure, in which a set of step‐down tests are summarized collectively with Fisher's combination statistic, was considered to test for multivariate mean equality in two‐group designs. An approximate degrees of freedom (ADF) composite procedure based on trimmed/Winsorized estimators and a non‐pooled estimate of error variance is proposed, and compared to a composite procedure based on trimmed/Winsorized estimators and a pooled estimate of error variance. The step‐down procedures were also compared to Hotelling's T2 and Johansen's ADF global procedure based on trimmed estimators in a simulation study. Type I error rates of the pooled step‐down procedure were sensitive to covariance heterogeneity in unbalanced designs; error rates were similar to those of Hotelling's T2 across all of the investigated conditions. Type I error rates of the ADF composite step‐down procedure were insensitive to covariance heterogeneity and less sensitive to the number of dependent variables when sample size was small than error rates of Johansen's test. The ADF composite step‐down procedure is recommended for testing hypotheses of mean equality in two‐group designs except when the data are sampled from populations with different degrees of multivariate skewness.

19.
The majority of research on sibling relationships has investigated only one or two siblings in a family, but there are many theoretical and methodological limitations to this single dyadic perspective. This study uses multiple siblings (541 adults) in 184 families, 96 of which had all siblings complete the study, to demonstrate the value of including full sibling groups when conducting research on sibling relationships. Two scales, positivity and willingness to sacrifice, are evaluated with a multilevel model to account for the nested nature of family relationships. The distribution of variance across three levels (relationship, individual, and family) is computed, and results indicate that the relationship level explains the most variance in positivity, whereas the individual level explains the majority of variance in willingness to sacrifice. These distributions are affected by gender composition and family size. The results of this study highlight an important and often overlooked element of family research: the meaning of a scale changes based on its distribution of variance at these three levels. Researchers are encouraged to be cognizant of the variance distribution of their scales when studying sibling relationships and to incorporate more full sibling groups into their research methods and study design.

20.

Sedentary lifestyles have been linked to higher rates of stroke, hypertension, depression, certain types of cancers, and cardiovascular disease, and to increased risk of mortality. The link between physical inactivity and health has led to research on how physical activity (PA) interventions might improve health-related quality of life (HRQoL). However, estimates of HRQoL improvements typically focus on targeted at-risk groups. Given that almost half of the U.S. adult population is physically inactive, it would be helpful to broaden our understanding of how PA relates to quality of life for the population at large. In this study, we calculated the HRQoL gains attributable to PA across three nationally representative data sets that use different quality of life measures, and assessed the reliability of the results. The data sets used were the Medical Expenditure Panel Survey (MEPS), the Behavioral Risk Factor Surveillance System (BRFSS), and the National Health Interview Survey (NHIS). Quasi-likelihood regression modeling with a beta distribution was used to generate the estimates. We found that mean HRQoL scores were very similar across the three data sets and that the estimated HRQoL gains from PA varied only slightly, suggesting that all three provide reliable estimates for the general population.



Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)