Similar Literature
20 similar documents retrieved.
1.
In primary data analysis, the individuals who collect the data also analyze them; in meta-analysis, an investigator quantitatively combines the statistical results from multiple studies of a phenomenon to reach a conclusion; in secondary data analysis, individuals who were not involved in the collection of the data analyze the data. Secondary data analysis may be based on the published data or on the original data. Most studies of animal cognition involve primary data analysis; it was difficult to identify any that were based on meta-analysis; secondary data analysis based on published data has been used effectively, and examples are given from the research of John Gibbon on scalar timing theory. Secondary data analysis can also be based on the original data if the original data are available in an archive. Such an archive in the field of animal cognition is feasible and desirable.

2.
This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.
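The article presents its examples in prose. As a minimal, purely illustrative sketch of the directly interpretable output a Bayesian analysis delivers, the following estimates a proportion with a Beta prior and binomial data; the counts and the prior are made up, not taken from the article.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 17 successes in 25 trials (illustrative only).
successes, trials = 17, 25

# Beta(1, 1) is a uniform prior on the underlying proportion theta.
prior_a, prior_b = 1.0, 1.0

# Beta prior + binomial likelihood gives a Beta posterior in closed form.
post_a = prior_a + successes
post_b = prior_b + (trials - successes)
posterior = stats.beta(post_a, post_b)

# The posterior can be read directly, e.g. as a credible interval for theta.
lo, hi = posterior.ppf([0.025, 0.975])
print(f"Posterior mean: {posterior.mean():.3f}")
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```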

3.
Psychologists are directed by ethical guidelines in most areas of their practice. However, there are very few guidelines for conducting data analysis in research. The aim of this article is to address the need for more extensive ethical guidelines for researchers who are post–data collection and beginning their data analyses. Improper data analysis is an ethical issue because it can result in publishing false or misleading conclusions. This article includes a review of ethical implications of improper data analysis and potential causes of unethical practices. In addition, current guidelines in psychology and other areas (e.g., American Psychological Association and American Statistical Association Ethics Codes) were used to inspire a list of recommendations for ethical conduct in data analysis that is appropriate for researchers in psychology.

4.
This paper defines and promotes the qualities of a “bottom-up” approach to single-case research (SCR) data analysis. Although “top-down” models, for example, multi-level or hierarchical linear models, are gaining momentum and have much to offer, interventionists should be cautious about analyses that are not easily understood, are not governed by a “wide lens” visual analysis, do not yield intuitive results, and remove the analysis process from the interventionist, who alone has intimate understanding of the design logic and resulting data patterns. “Bottom-up” analysis possesses benefits which fit well with SCR, including applicability to designs with few data points and few phases, customization of analyses based on design and data idiosyncrasies, conformity with visual analysis, and directly meaningful effect sizes. Examples are provided to illustrate these benefits of bottom-up analyses.

5.
In the internet era, people leave behind a wide variety of online information that reflects their psychological processes and cultural characteristics. These vast and diverse internet data offer a new perspective for research in cultural psychology. First, cultural psychology currently has two research orientations, cultural differences and cultural change, and internet data hold several advantages over data from traditional sources for both. Second, for four types of internet data, cultural psychologists conduct research using text analysis, multimedia analysis, social network analysis, and analysis of internet usage behavior. Third, research on cultural differences and cultural change based on internet data and these analytic methods has produced substantial results. Finally, current internet-data-based research in cultural psychology is subject to limitations in validity, technology, and theory; future research should improve methodological validity, the depth of results, and theoretical diversity through appropriate sampling, validation of new indicators, quasi-causal analysis, fuller use of new technologies, and data-driven approaches.

6.
Although it is common in community psychology research to have data at both the community, or cluster, and individual level, the analysis of such clustered data often presents difficulties for many researchers. Since the individuals within the cluster cannot be assumed to be independent, the use of many traditional statistical techniques that assume independence of observations is problematic. Further, there is often interest in assessing the degree of dependence in the data resulting from the clustering of individuals within communities. In this paper, a random-effects regression model is described for analysis of clustered data. Unlike ordinary regression analysis of clustered data, random-effects regression models do not assume that each observation is independent, but do assume data within clusters are dependent to some degree. The degree of this dependency is estimated along with estimates of the usual model parameters, thus adjusting these effects for the dependency resulting from the clustering of the data. Models are described for both continuous and dichotomous outcome variables, and available statistical software for these models is discussed. An analysis of a data set where individuals are clustered within firms is used to illustrate features of random-effects regression analysis, relative to both individual-level analysis, which ignores the clustering of the data, and cluster-level analysis, which aggregates the individual data. Preparation of this article was supported by National Heart, Lung, and Blood Institute Grant R18 HL42987-01A1, National Institute of Mental Health Grant MH44826-01A2, and University of Illinois at Chicago Prevention Research Center Developmental Project CDC Grant R48/CCR505025.
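The abstract does not name particular software. The sketch below shows one common way to fit a random-intercept (random-effects) regression to simulated firm-clustered data with Python's statsmodels, and computes the intraclass correlation from the fitted variance components; all variable names and values are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated clustered data: individuals nested within firms (names are illustrative).
rng = np.random.default_rng(0)
n_firms, n_per_firm = 30, 20
firm = np.repeat(np.arange(n_firms), n_per_firm)
firm_effect = rng.normal(0, 1.0, n_firms)[firm]   # shared within-firm deviation
x = rng.normal(size=firm.size)
y = 2.0 + 0.5 * x + firm_effect + rng.normal(0, 1.0, firm.size)
df = pd.DataFrame({"y": y, "x": x, "firm": firm})

# Random-intercept regression: observations within a firm are allowed to be dependent.
model = smf.mixedlm("y ~ x", df, groups=df["firm"]).fit()
print(model.summary())

# Intraclass correlation: share of total variance attributable to clustering.
var_between = model.cov_re.iloc[0, 0]
var_within = model.scale
print("ICC:", var_between / (var_between + var_within))
```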

7.
8.
A measure of “clusterability” serves as the basis of a new methodology designed to preserve cluster structure in a reduced dimensional space. Similar to principal component analysis, which finds the direction of maximal variance in multivariate space, principal cluster axes find the direction of maximum clusterability in multivariate space. Furthermore, the principal clustering approach falls into the class of projection pursuit techniques. Comparisons are made with existing methodologies both in a simulation study and in analyses of real-world data sets. A demonstration of how to interpret the results of the principal cluster axes is provided through an analysis of Supreme Court voting data, and similarities with the interpretation of competing procedures (e.g., factor analysis and principal component analysis) are noted. In addition to the Supreme Court analysis, we analyze several data sets often used to test cluster analysis procedures, including Fisher's Iris data, Agresti's Crab data, and a data set on glass fragments. Finally, discussion is provided to help determine when the proposed procedure will be the most beneficial to the researcher.

9.
Influence analysis is an important component of data analysis, and the local influence approach has been widely applied to many statistical models to identify influential observations and assess minor model perturbations since the pioneering work of Cook (1986). The approach is often adopted to develop influence analysis procedures for factor analysis models with ranking data. However, as this well‐known approach is based on the observed data likelihood, which involves multidimensional integrals, directly applying it to develop influence analysis procedures for the factor analysis models with ranking data is difficult. To address this difficulty, a Monte Carlo expectation and maximization algorithm (MCEM) is used to obtain the maximum‐likelihood estimate of the model parameters, and measures for influence analysis on the basis of the conditional expectation of the complete data log likelihood at the E‐step of the MCEM algorithm are then obtained. Very little additional computation is needed to compute the influence measures, because it is possible to make use of the by‐products of the estimation procedure. Influence measures that are based on several typical perturbation schemes are discussed in detail, and the proposed method is illustrated with two real examples and an artificial example.

10.
Data often contain periodic components plus random variability. Walsh analysis reveals periodicities by fitting rectangular functions to data. It is analogous to Fourier analysis, which represents data as sine and cosine functions. For many behavioral measures, Fourier transforms can produce spurious peaks in power spectra and fail to resolve separable components. Walsh analysis is superior for strongly discontinuous data. The strengths and weaknesses of each transform are discussed, and specific algorithms are given for the newer Walsh technique.
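The specific algorithms are given in the article itself. As a generic illustration of the idea, here is a small fast Walsh–Hadamard transform (natural/Hadamard ordering) applied to a strongly discontinuous series, with squared coefficients playing the role of a power spectrum; the data and the ordering convention are assumptions, not the article's.

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform (Hadamard ordering).
    The length of `a` must be a power of two."""
    a = np.asarray(a, dtype=float).copy()
    n = a.size
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# A strongly discontinuous (square-wave-like) series of length 16.
x = np.tile([1.0, 1.0, -1.0, -1.0], 4)
coeffs = fwht(x)

# Squared coefficients act as a power spectrum over rectangular components.
power = coeffs ** 2 / x.size
print(power)
```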

11.
JOB ANALYSIS MODELS AND JOB CLASSIFICATION
Recent research in job classification has focused on the appropriate data analysis model for analyzing the similarities and differences among jobs. In the present research, the data analysis model is held constant, and the type of job analysis data is varied to examine the effect on the resulting job classification decisions. Seven foremen jobs in a chemical processing plant were analyzed using three different levels of job analysis data: task-oriented, worker-oriented, and abilities-oriented. All three sets of data were analyzed using the same hierarchical clustering procedure. Results indicated that the number and type of resulting job clusters were clearly dictated by the type of job analysis data that was used to compare the foremen jobs. Practical implications of these findings are presented.
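The original ratings are not reproduced here; the following sketch simply illustrates the kind of hierarchical clustering procedure described, applied to hypothetical job-analysis profiles with SciPy. The profile data, linkage method, and number of clusters are all placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical job-analysis profiles: 7 foreman jobs rated on 10 dimensions.
rng = np.random.default_rng(1)
profiles = rng.normal(size=(7, 10))
job_names = [f"Foreman {i + 1}" for i in range(7)]

# Agglomerative clustering on Euclidean distances between job profiles.
Z = linkage(profiles, method="average", metric="euclidean")

# Cut the tree into, say, three job families.
labels = fcluster(Z, t=3, criterion="maxclust")
for name, label in zip(job_names, labels):
    print(name, "-> cluster", label)
```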

12.
Dual scaling is a set of related techniques for the analysis of a wide assortment of categorical data types including contingency tables and multiple-choice, rank order, and paired comparison data. When applied to a contingency table, dual scaling also goes by the name "correspondence analysis," and when applied to multiple-choice data in which there are more than 2 items, "optimal scaling" and "multiple correspondence analysis." Our aim in this article is to explain in nontechnical terms what dual scaling offers to an analysis of contingency table and multiple-choice data.
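As a rough numerical illustration of what correspondence analysis of a contingency table involves (not the article's own nontechnical exposition), the sketch below performs a simple correspondence analysis by taking the SVD of the standardized residuals of a small, made-up table.

```python
import numpy as np

# A small illustrative contingency table (rows: groups, columns: response categories).
N = np.array([[20.0,  5.0, 10.0],
              [ 8.0, 15.0,  7.0],
              [ 4.0,  9.0, 22.0]])

P = N / N.sum()        # correspondence matrix
r = P.sum(axis=1)      # row masses
c = P.sum(axis=0)      # column masses

# Standardized residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}
S = np.diag(1 / np.sqrt(r)) @ (P - np.outer(r, c)) @ np.diag(1 / np.sqrt(c))

U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates of the rows and columns on the leading dimensions.
row_coords = (np.diag(1 / np.sqrt(r)) @ U) * sigma
col_coords = (np.diag(1 / np.sqrt(c)) @ Vt.T) * sigma
print("Inertia per dimension:", sigma ** 2)
print("Row coordinates:\n", row_coords[:, :2])
print("Column coordinates:\n", col_coords[:, :2])
```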

13.
Multiple‐set canonical correlation analysis and principal components analysis are popular data reduction techniques in various fields, including psychology. Both techniques aim to extract a series of weighted composites or components of observed variables for the purpose of data reduction. However, their objectives of performing data reduction are different. Multiple‐set canonical correlation analysis focuses on describing the association among several sets of variables through data reduction, whereas principal components analysis concentrates on explaining the maximum variance of a single set of variables. In this paper, we provide a unified framework that combines these seemingly incompatible techniques. The proposed approach embraces the two techniques as special cases. More importantly, it permits a compromise between the techniques in yielding solutions. For instance, we may obtain components in such a way that they maximize the association among multiple data sets, while also accounting for the variance of each data set. We develop a single optimization function for parameter estimation, which is a weighted sum of two criteria for multiple‐set canonical correlation analysis and principal components analysis. We minimize this function analytically. We conduct simulation studies to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of functional neuroimaging data to illustrate its empirical usefulness.

14.
A number of researchers have argued that ipsative data are not suitable for statistical procedures designed for normative data. Others have argued that the interpretability of such analyses of ipsative data is little affected where the number of variables and the sample size are sufficiently large. The research reported here represents a factor analysis of the scores on the Canfield Learning Styles Inventory for 1,252 students in vocational education. The results of the factor analysis of these ipsative data were examined in the context of existing theory and research on vocational students and lend support to the argument that the factor analysis of ipsative data can provide sensibly interpretable results.
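The Canfield data are not available here; the sketch below only illustrates, on simulated ratings, what ipsatizing does to the data (every row sums to a constant) and how a factor analysis can nonetheless be run on such scores with scikit-learn. It is not a reanalysis of the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)

# Hypothetical normative ratings: 300 respondents x 8 scales (not the Canfield data).
normative = rng.normal(size=(300, 8)) + rng.normal(size=(300, 1))

# Ipsatizing subtracts each respondent's own mean, so every row sums to zero and
# the columns become linearly dependent -- the usual objection to factoring such scores.
ipsative = normative - normative.mean(axis=1, keepdims=True)
print("Row sums after ipsatization:", np.round(ipsative.sum(axis=1)[:3], 10))

# A factor analysis can still be run on the ipsative scores; the question raised in
# the literature is how interpretable the resulting loadings are.
fa = FactorAnalysis(n_components=2).fit(ipsative)
print(np.round(fa.components_, 2))
```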

15.
Time series analysis (TSA) is one of a number of new methods of data analysis appropriate for longitudinal data. Simonton (1998) applied TSA to an analysis of the causal relationship between two types of stress and both the physical and mental health of George III. This innovative application demonstrates both the strengths and weaknesses of time series analysis. Time series analysis is applicable to a unique class of problems, can use information about temporal ordering to make statements about causation, and focuses on patterns of change over time, all strengths of the Simonton study. Time series analysis also suffers from a number of weaknesses, including problems with generalization from a single study, difficulty in obtaining appropriate measures, and problems with accurately identifying the correct model to represent the data. While careful attempts are made to minimize these problems, each is present in the Simonton study, although sometimes in a subtle manner. Changes in how the data could be gathered are suggested that might help to solve some of these problems in future studies. Finally, the advantages and disadvantages of employing alternative methods for analyzing multivariate time series data, including dynamic factor analysis, are discussed.

16.
The computer program ALICE solves the two major problems of data manipulation and analysis. First, ALICE allows the user to treat data from an experiment in the form they are generated. Second, mathematical calculations and statistical analyses are included as an intrinsic part of the multidimensional approach to data handling. ALICE accepts raw data in the form and order they were collected; reorganizes, partitions, or selects any subset of them (including a single entry), and arithmetically combines, transforms, or evaluates any formula involving them. Furthermore, learning to use ALICE is simple, even for those who are naive to both computers and data analysis.

17.
Some mathematical notes on three-mode factor analysis
The model for three-mode factor analysis is discussed in terms of newer applications of mathematical processes including a type of matrix process termed the Kronecker product and the definition of combination variables. Three methods of analysis, amounting to a type of extension of principal components analysis, are discussed. Methods II and III are applicable to analysis of data collected for a large sample of individuals. An extension of the model is described in which allowance is made for unique variance for each combination variable when the data are collected for a large sample of individuals.
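As a small numerical note on the role of the Kronecker product in the three-mode model (an illustration, not Tucker's own derivation), the sketch below builds a three-mode array from loading matrices and a core array and verifies that its mode-1 unfolding equals A G_(1) (C ⊗ B)ᵀ; all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes: I individuals, J variables, K occasions, with a small core.
I, J, K = 6, 5, 4
P, Q, R = 2, 2, 2

A = rng.normal(size=(I, P))     # mode-1 (individuals) loadings
B = rng.normal(size=(J, Q))     # mode-2 (variables) loadings
C = rng.normal(size=(K, R))     # mode-3 (occasions) loadings
G = rng.normal(size=(P, Q, R))  # core array linking the three modes

# Build the three-mode array elementwise from the model:
# x_ijk = sum over p, q, r of a_ip * b_jq * c_kr * g_pqr
X = np.einsum("ip,jq,kr,pqr->ijk", A, B, C, G)

# The same model in matrix form uses a Kronecker product:
#   X_(1) = A G_(1) (C kron B)^T,
# where X_(1) and G_(1) unfold the arrays along the first mode.
X1 = X.reshape(I, J * K, order="F")
G1 = G.reshape(P, Q * R, order="F")
X1_model = A @ G1 @ np.kron(C, B).T
print(np.allclose(X1, X1_model))   # True
```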

18.
Principal component analysis (PCA) and common factor analysis are often used to model latent data structures. Typically, such analyses assume a single population whose correlation or covariance matrix is modelled. However, data may sometimes be unwittingly sampled from mixed populations containing a taxon (nonarbitrary subpopulation) and its complement class. One derives relations between values of PCA parameters within subpopulations and their values in the mixed population. These results are then extended to factor analysis in mixed populations. As relationships between subpopulation and mixed-population principal components and factors sensitively depend on within-subpopulation structures and between-subpopulation differences, naive interpretation of PCA or factor analytic findings can potentially mislead. Several analyses, better suited to the dimensional analysis of admixture data structures, are presented and compared.
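A toy simulation (entirely made up, not from the paper) can show the phenomenon the abstract warns about: the leading principal component of an admixed sample can be dominated by the between-group mean difference rather than by the within-subpopulation structure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical subpopulations (a "taxon" and its complement) with the same
# within-group covariance but a mean difference on the first variable.
n = 500
within_cov = np.array([[1.0, 0.6, 0.0],
                       [0.6, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])
taxon = rng.multivariate_normal([3.0, 0.0, 0.0], within_cov, size=n)
complement = rng.multivariate_normal([0.0, 0.0, 0.0], within_cov, size=n)

def first_pc(X):
    """Loading vector of the first principal component of a centered sample."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]

# Within either subpopulation, PC1 reflects the 0.6 correlation between variables
# 1 and 2; in the admixed sample it is pulled toward the between-group difference.
print("Taxon PC1:      ", np.round(first_pc(taxon), 2))
print("Complement PC1: ", np.round(first_pc(complement), 2))
print("Mixed PC1:      ", np.round(first_pc(np.vstack([taxon, complement])), 2))
```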

19.
Basing the analysis of multi-wave data upon explicit models for change is recommended. Several univariate and multivariate models are described, which emerge from an interaction between the classical test theory and the structural equation modeling approach. The resulting structural models for analyzing change reflect in some of their parameters substantively interesting aspects of intra- and interindividual change in follow-up studies. The models are viewed as an alternative to an ANOVA-based analysis of longitudinal data, and are illustrated on data from a cognitive intervention study of old adults (Baltes et al., 1986). The approach presents a useful means of analyzing change over time, and is applicable for purposes of (latent) growth curve analysis when analysis of variance assumptions are violated (e.g., Schaie & Hertzog, 1982; Morrison, 1976).

20.
Gait data are typically collected in multivariate form, so some multivariate analysis is often used to understand interrelationships between observed data. Principal Component Analysis (PCA), a data reduction technique for correlated multivariate data, has been widely applied by gait analysts to investigate patterns of association in gait waveform data (e.g., interrelationships between joint angle waveforms from different subjects and/or joints). Despite its widespread use in gait analysis, PCA is for two-mode data, whereas gait data are often collected in higher-mode form. In this paper, we present the benefits of analyzing gait data via Parallel Factor Analysis (Parafac), which is a component analysis model designed for three- or higher-mode data. Using three-mode joint angle waveform data (subjects×time×joints), we demonstrate Parafac's ability to (a) determine interpretable components revealing the primary interrelationships between lower-limb joints in healthy gait and (b) identify interpretable components revealing the fundamental differences between normal and perturbed subjects' gait patterns across multiple joints. Our results offer evidence of the complex interconnections that exist between lower-limb joints and limb segments in both normal and abnormal gaits, confirming the need for the simultaneous analysis of multi-joint gait waveform data (especially when studying perturbed gait patterns).
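The gait data themselves are not available here. As an illustration of fitting a Parafac model to a three-mode array, the sketch below decomposes synthetic subjects × time × joints data with the tensorly library (assuming a recent tensorly version in which parafac returns weights and factor matrices); all sizes and waveforms are invented.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical three-mode gait array: subjects x time points x joints
# (synthetic data standing in for joint-angle waveforms).
rng = np.random.default_rng(4)
n_subjects, n_time, n_joints = 20, 101, 3
t = np.linspace(0, 1, n_time)

# Two underlying waveform components shared across subjects and joints.
waveforms = np.stack([np.sin(2 * np.pi * t), np.cos(4 * np.pi * t)])  # (2, time)
subject_scores = rng.normal(size=(n_subjects, 2))
joint_loadings = rng.normal(size=(n_joints, 2))
X = np.einsum("sr,rt,jr->stj", subject_scores, waveforms, joint_loadings)
X += 0.1 * rng.normal(size=X.shape)

# Two-component Parafac decomposition: one factor matrix per mode.
weights, (subj_f, time_f, joint_f) = parafac(tl.tensor(X), rank=2)
print(subj_f.shape, time_f.shape, joint_f.shape)   # (20, 2) (101, 2) (3, 2)
```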
