Related Articles
1.
The actor–partner interdependence model (APIM) has been widely used for the analysis of pairs of individuals who interact with each other. The goal of this article is to detail, in a nontechnical way, how the APIM for binary or count outcomes can be implemented and how actor and partner effects can be estimated using generalized estimating equations (GEE) methodology. Both the SPSS and SAS syntax needed to estimate the model and the interpretation of the output are illustrated using data from a study exploring the effect of satisfaction with the relationship before the breakup on unwanted pursuit behavior after the breakup in formerly married partners. The use of this GEE method will allow researchers to test a wide array of research hypotheses.

2.
This paper discusses a regression model for the analysis of longitudinal count data observed in a panel study. An integer-valued first-order autoregressive [INAR(1)] Poisson process is adapted to represent time-dependent correlations among the counts. By combining the INAR(1) representation with a random-effects approach, a new negative multinomial distribution is derived that includes the bivariate negative binomial distribution proposed by Edwards and Gurland (1961) and Subrahmaniam (1966) as a special case. A detailed analysis of the relationship between personality factors and daily emotion experiences illustrates the approach. This research was partially supported by NSF grant SBR-9409531. The author is grateful to Ulrich Schimmack and Ed Diener for providing the data set used in the application section and for helpful comments on this research.
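A small simulation of the INAR(1) Poisson process that the abstract refers to: X_t = a ∘ X_{t-1} + e_t, where "∘" denotes binomial thinning and e_t ~ Poisson(lam). The parameter values are illustrative; the stationary marginal is Poisson with mean lam / (1 - a), and the lag-1 autocorrelation equals the thinning parameter a.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, lam, T = 0.5, 2.0, 200_000

x = np.empty(T, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))  # draw from the stationary mean
for t in range(1, T):
    survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning: alpha ∘ X_{t-1}
    x[t] = survivors + rng.poisson(lam)        # Poisson innovation e_t

mean_hat = x.mean()                            # should be near lam / (1 - alpha) = 4.0
acf1_hat = np.corrcoef(x[:-1], x[1:])[0, 1]    # should be near alpha = 0.5
print(mean_hat, acf1_hat)
```

Binomial thinning keeps the process integer-valued, which is what distinguishes INAR(1) from an ordinary Gaussian AR(1) applied to counts.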

3.
Discounting is the process by which outcomes lose value. Much of discounting research has focused on differences in the degree of discounting across various groups. This research has relied heavily on conventional null hypothesis significance tests that are familiar to psychologists, such as t-tests and ANOVAs. As discounting research questions have become more complex by simultaneously focusing on within-subject and between-group differences, conventional statistical testing is often not appropriate for the obtained data. Generalized estimating equations (GEE) are one type of mixed-effects model designed to handle autocorrelated data, such as within-subject repeated-measures data, and are therefore more appropriate for discounting data. To determine whether GEE provides results similar to those of conventional statistical tests, we compared the techniques across 2,000 simulated data sets. The data sets were created using a Monte Carlo method based on an existing data set. Across the simulated data sets, the GEE and the conventional statistical tests generally provided similar patterns of results. As the two approaches provide the same pattern of results, we suggest researchers use the GEE, which was designed to handle data with the structure typical of discounting data.

4.
This paper extends the biplot technique to canonical correlation analysis and redundancy analysis. The plot of structure correlations is shown to be optimal for displaying the pairwise correlations between the variables of one set and those of the other. The link between multivariate regression and canonical correlation analysis/redundancy analysis is exploited to produce an optimal biplot that displays a matrix of regression coefficients. This plot can be made from the canonical weights of the predictors and the structure correlations of the criterion variables. An example shows how the proposed biplots may be interpreted.

5.
Seven (indeed, plus or minus two) and the detection of correlations
Capacity limitations of working memory force people to rely on samples consisting of 7 +/- 2 items. The implications of these limitations for the early detection of correlations between binary variables were explored in a theoretical analysis of the sampling distribution of phi, the contingency coefficient. The analysis indicated that, for strong correlations (phi > .50), sample sizes of 7 +/- 2 are most likely to produce a sample correlation that is more extreme than that of the population. Another analysis then revealed that there is a similar cutoff point at which useful correlations (i.e., for which each variable is a valid predictor of the other) first outnumber correlations for which this is not the case. Capacity limitations are thus shown to maximize the chances for the early detection of strong and useful relations.
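The sampling argument above can be checked numerically: with only 7 observations, how often does the sample phi coefficient overshoot a strong population correlation? The population below (cell probabilities giving phi = 0.6) is an illustrative choice, not taken from the article.

```python
import numpy as np

def phi(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table [[a, b], [c, d]]."""
    denom = np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom > 0 else np.nan

# Population cell probabilities [p00, p01, p10, p11]; phi works on probabilities too.
p = np.array([0.4, 0.1, 0.1, 0.4])
pop_phi = phi(p[0], p[1], p[2], p[3])  # = 0.6

rng = np.random.default_rng(2)
n, reps = 7, 20_000
counts = rng.multinomial(n, p, size=reps)          # one 2x2 table per sample of size 7
sample_phis = np.array([phi(*c) for c in counts])
valid = ~np.isnan(sample_phis)                     # drop degenerate tables (empty row/column)
overshoot = np.mean(np.abs(sample_phis[valid]) > pop_phi)
print(pop_phi, overshoot)
```

The `overshoot` proportion estimates how often a tiny sample yields a more extreme correlation than the population's, the phenomenon the analysis ties to early detection.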

6.
A theoretical analysis shows that sample correlations between two binary variables will be inflated when the frequency distributions of the two variables are flatter (i.e., closer to equal frequencies for the two values) in the sample than in the population. A correlation-assessment study in which participants were free to choose their own sample revealed an overwhelming preference for samples that included roughly the same number of observations for the two values of dichotomous variables, irrespective of their actual distribution in the population. Subjective estimates of observed correlations followed the sample correlations--which were inflated, as predicted--more closely than the true correlations. People's sampling behavior thus resembles that of a research designer who maximizes the chance of detecting a relationship, at the cost of diminished accuracy in estimating its strength.

7.
When using linear models for cluster-correlated or longitudinal data, a common modeling practice is to begin by fitting a relatively simple model and then to increase the model complexity in steps. New predictors might be added to the model, or a more complex covariance structure might be specified for the observations. When fitting models for binary or ordered-categorical outcomes, however, comparisons between such models are impeded by the implicit rescaling of the model estimates that takes place with the inclusion of new predictors and/or random effects. This paper presents an approach for putting the estimates on a common scale to facilitate relative comparisons between models fit to binary or ordinal outcomes. The approach is developed for both population-average and unit-specific models.

8.
Sequential multiple assignment randomized trials (SMARTs) are a useful and increasingly popular approach for gathering information to inform the construction of adaptive interventions to treat psychological and behavioral health conditions. Until recently, analysis methods for data from SMART designs considered only a single measurement of the outcome of interest when comparing the efficacy of adaptive interventions. Lu et al. proposed a method for considering repeated outcome measurements to incorporate information about the longitudinal trajectory of change. While their proposed method can be applied to many kinds of outcome variables, they focused mainly on linear models for normally distributed outcomes. Practical guidelines and extensions are required to implement this methodology with other types of repeated outcome measures common in behavioral research. In this article, we discuss implementation of this method with repeated binary outcomes. We explain how to compare adaptive interventions in terms of various summaries of repeated binary outcome measures, including average outcome (area under the curve) and delayed effects. The method is illustrated using an empirical example from a SMART study to develop an adaptive intervention for engaging alcohol- and cocaine-dependent patients in treatment. Monte Carlo simulations are provided to demonstrate the good performance of the proposed technique.

9.
It is shown that a unidimensional monotone latent variable model for binary items implies a restriction on the relative sizes of item correlations: The negative logarithm of the correlations satisfies the triangle inequality. This inequality is not implied by the condition that the correlations are nonnegative, the criterion that coefficient H exceeds 0.30, or manifest monotonicity. The inequality implies both a lower bound and an upper bound for each correlation between two items, based on the correlations of those two items with every possible third item. It is discussed how this can be used in Mokken’s (A theory and procedure of scale-analysis, Mouton, The Hague, 1971) scale analysis.
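The stated restriction can be verified on a toy correlation pattern. If item correlations factor as r_ij = l_i * l_j (an idealized unidimensional pattern; the loadings below are illustrative), then -log r satisfies the triangle inequality, which is equivalent to r_ij >= r_ik * r_kj for every triple of items.

```python
import itertools
import math

loadings = [0.8, 0.7, 0.6, 0.5]  # illustrative item "loadings", all in (0, 1]
r = {(i, j): loadings[i] * loadings[j]
     for i, j in itertools.permutations(range(4), 2)}

def triangle_ok(i, j, k):
    # -log r_ij <= -log r_ik + -log r_kj, with a small tolerance for rounding
    return -math.log(r[i, j]) <= -math.log(r[i, k]) - math.log(r[k, j]) + 1e-12

all_ok = all(triangle_ok(i, j, k)
             for i, j, k in itertools.permutations(range(4), 3))
print(all_ok)  # True for this factorable pattern
```

Rearranging the inequality gives the bounds the abstract mentions: for any third item k, r_ik * r_kj is a lower bound on r_ij, and r_ik / r_kj (when it is below 1) an upper bound.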

10.
This article demonstrates the use of mixed-effects logistic regression (MLR) for conducting sequential analyses of binary observational data. MLR is a special case of the mixed-effects logit modeling framework, which may be applied to multicategorical observational data. The MLR approach is motivated in part by the advances of G. A. Dagne, G. W. Howe, C. H. Brown, and B. O. Muthén (2002) in general linear mixed models for sequential analyses of observational data in the form of contingency table frequency counts. The advantage of the MLR approach is that it circumvents obstacles in the estimation of random sampling error encountered using Dagne and colleagues' approach. This article demonstrates the MLR model in an analysis of observed sequences of communication in a sample of young adult same-sex peer dyads. The results obtained using MLR are compared with those of a parallel analysis using Dagne and colleagues' linear mixed model for binary observational data in the form of log odds ratios. Similarities and differences between the results of the two approaches are discussed. Implications for the use of linear mixed models versus mixed-effects logit models for sequential analyses are considered.

11.
A common practice in cognitive modeling is to develop new models specific to each particular task. We question this approach and draw on an existing theory, instance-based learning theory (IBLT), to explain learning behavior in three different choice tasks. The same instance-based learning model generalizes accurately to choices in a repeated binary choice task, in a probability learning task, and in a repeated binary choice task within a changing environment. We assert that, although the three tasks are different, the source of learning is equivalent and that the cognitive process elicited should therefore be captured by a single model. This evidence supports previous findings that instance-based learning is a robust learning process that is triggered in a wide range of tasks, from simple repeated choice tasks to highly dynamic decision-making tasks. Copyright © 2010 John Wiley & Sons, Ltd.

12.
This research was motivated by a clinical trial design for a cognitive study. The pilot study was a matched-pairs design in which some data are missing, with the missing observations occurring at the end of the study. Existing approaches to determining sample size are all based on asymptotic methods (e.g., the generalized estimating equation (GEE) approach). When the sample size in a clinical trial is small to medium, these asymptotic approaches may be inappropriate because of unsatisfactory Type I and II error rates. For this reason, we consider the exact unconditional approach to computing the sample size for a matched-pairs study with incomplete data. Recommendations are made for each possible missingness pattern by comparing the exact sample sizes based on three commonly used test statistics with the existing sample size calculation based on the GEE approach. An example from a real surgeon-reviewers study illustrates the application of the exact sample size calculation in study designs.

13.
Solving theoretical or empirical issues sometimes involves establishing the equality of two variables with repeated measures. This defies the logic of null hypothesis significance testing, which aims at assessing evidence against the null hypothesis of equality, not for it. In some contexts, equivalence is assessed through regression analysis by testing for zero intercept and unit slope (or simply for unit slope when regression is forced through the origin). This paper shows that this approach yields highly inflated Type I error rates under the most common sampling models implied in studies of equivalence. We propose an alternative approach based on omnibus tests of equality of means and variances and on subject-by-subject analyses (where applicable), and we show that these tests have adequate Type I error rates and power. The approach is illustrated with a re-analysis of published data from a signal detection theory experiment in which several hypotheses of equivalence had been tested using only regression analysis. Some further errors and inadequacies of the original analyses are described, and further scrutiny of the data contradicts the conclusions reached through inadequate application of regression analyses.

14.
15.
The analysis of continuous hierarchical data such as repeated measures or data from meta-analyses can be carried out by means of the linear mixed-effects model. However, in some situations this model, in its standard form, does pose computational problems. For example, when dealing with crossed random-effects models, the estimation of the variance components becomes a non-trivial task if only one observation is available for each cross-classified level. Pseudolikelihood ideas have been used in the context of binary data with standard generalized linear multilevel models. However, even in this case the problem of the estimation of the variance remains non-trivial. In this paper, we first propose a method to fit a crossed random-effects model with two levels and continuous outcomes, borrowing ideas from conditional linear mixed-effects model theory. We also propose a crossed random-effects model for binary data combining ideas of conditional logistic regression with pseudolikelihood estimation. We apply this method to a case study with data coming from the field of psychometrics and study a series of items (responses) crossed with participants. A simulation study assesses the operational characteristics of the method.

16.
This paper discusses rowwise matrix correlation, based on the weighted sum of correlations between all pairs of corresponding rows of two proximity matrices, which may both be square (symmetric or asymmetric) or rectangular. Using the correlation coefficients usually associated with Pearson, Spearman, and Kendall, three different rowwise test statistics and their normalized coefficients are discussed, and subsequently compared with their nonrowwise alternatives like Mantel's Z. It is shown that the rowwise matrix correlation coefficient between two matrices X and Y is the partial correlation between the entries of X and Y controlled for the nominal variable that has the row objects as categories. Given this fact, partial rowwise correlations (as well as multiple regression extensions in the case of Pearson's approach) can be easily developed. The author wishes to thank the Editor, two referees, Jan van Hooff, and Ruud Derix for their useful comments, and E. J. Dietz for a copy of the algorithm of the Mantel permutation test.
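The partial-correlation identity stated above can be checked numerically on random matrices: residualizing the vectorized entries of X and Y on row-indicator dummies simply removes the row means, so the partial correlation controlling for the row variable coincides with a row-centered (Pearson) rowwise coefficient. The data and normalization below are an illustrative sketch, not the paper's exact weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
n_rows, n_cols = 6, 10
X = rng.normal(size=(n_rows, n_cols))
Y = 0.5 * X + rng.normal(size=(n_rows, n_cols))

# Rowwise coefficient: center each row, then correlate all entries at once.
Xc = X - X.mean(axis=1, keepdims=True)
Yc = Y - Y.mean(axis=1, keepdims=True)
rowwise = (Xc * Yc).sum() / np.sqrt((Xc**2).sum() * (Yc**2).sum())

# Partial correlation controlling for the row category: residualize the
# vectorized entries on row dummies and correlate the residuals.
rows = np.repeat(np.arange(n_rows), n_cols)
D = (rows[:, None] == np.arange(n_rows)[None, :]).astype(float)

def resid(v):
    beta, *_ = np.linalg.lstsq(D, v, rcond=None)
    return v - D @ beta

rx, ry = resid(X.ravel()), resid(Y.ravel())
partial = rx @ ry / np.sqrt((rx @ rx) * (ry @ ry))
print(rowwise, partial)  # agree up to floating-point error
```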

17.
This article compares a variety of imputation strategies for ordinal missing data on Likert scale variables (number of categories = 2, 3, 5, or 7) in recovering reliability coefficients, mean scale scores, and regression coefficients of predicting one scale score from another. The examined strategies include imputing using normal data models with and without naïve rounding, using latent variable models, and using categorical data models such as discriminant analysis and binary logistic regression (for dichotomous data only) and multinomial and proportional odds logistic regression (for polytomous data only). The results suggest that both the normal model approach without rounding and the latent variable model approach perform well for either dichotomous or polytomous data regardless of sample size, missing data proportion, and asymmetry of item distributions. The discriminant analysis approach also performs well for dichotomous data. Naïvely rounding normal imputations or using logistic regression models to impute ordinal data are not recommended, as they can potentially lead to substantial bias in all or some of the parameters.

18.
The inclusive classify-analyze approach overcomes the underestimation of parameters in subsequent univariate regression models that afflicts the traditional classify-analyze approach, while preserving the latent class model's ability to simplify interactions among variables in subsequent analyses. This paper further extends the inclusive classify-analyze approach to multivariate statistical analyses following latent profile models. Through Monte Carlo simulation experiments, we compare various strategies for including variables, together with different subsequent-analysis models, in terms of parameter estimation across four common multivariate regression models. The results show that which variables the inclusive approach must include depends on their relationships with the dependent variable and the latent class variable in the subsequent analysis, and that subsequent analyses using models with interaction terms are more robust.

19.
A recently developed class of multilevel or hierarchical linear models (HLM) provides an intuitive and efficient way to estimate individual growth or change curves. The approach also models the between-subjects variation of the individual change curves with treatment factors and individual attributes. Unlike other repeated measures analysis methods common in the behavioral sciences, HLM allows the fit of data with unequal numbers of repeated observations for each subject, variable timing of observations, and missing data, features which are often characteristic of data from field studies. The application of HLM for the analysis of repeated psychological measures is discussed and illustrated here with depression data for college students. Strengths and limitations of the approach are discussed.

20.
Loftus and Masson (1994) proposed a method for computing confidence intervals (CIs) in repeated measures (RM) designs and later proposed that RM CIs for factorial designs should be based on number of observations rather than number of participants (Masson & Loftus, 2003). However, determining the correct number of observations for a particular effect can be complicated, given that its value depends on the relation between the effect and the overall design. To address this, we recently defined a general number-of-observations principle, explained why it obtains, and provided step-by-step instructions for constructing CIs for various effect types (Jarmasz & Hollands, 2009). In this note, we provide a brief summary of our approach.
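For a one-factor repeated-measures design, the Loftus–Masson style CI has half-width t * sqrt(MS_SxC / n), where MS_SxC is the subject-by-condition interaction mean square; removing the subject means strips out the between-subject variability that an ordinary CI would include. A minimal sketch on simulated data (condition means and variances are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_subj, n_cond = 12, 3

# Condition effects plus large per-subject offsets (the component the
# within-subject CI is meant to discard).
data = (rng.normal(loc=[0.0, 0.5, 1.0], size=(n_subj, n_cond))
        + rng.normal(scale=2.0, size=(n_subj, 1)))

grand = data.mean()
subj = data.mean(axis=1, keepdims=True)
cond = data.mean(axis=0, keepdims=True)
resid = data - subj - cond + grand              # subject-by-condition residuals
df_int = (n_subj - 1) * (n_cond - 1)
ms_sxc = (resid**2).sum() / df_int              # interaction mean square

half_width = stats.t.ppf(0.975, df_int) * np.sqrt(ms_sxc / n_subj)
print(cond.ravel(), half_width)                 # condition means ± common half-width
```

Because every condition shares the same error term, the method yields one common half-width for all condition means, which is what makes the resulting error bars directly comparable.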
