Similar Articles
20 similar articles found.
1.
Explaining group-level outcomes from individual-level predictors requires aggregating the individual-level scores to the group level and correcting the group-level estimates for measurement errors in the aggregated scores. However, for discrete variables it is not clear how to perform the aggregation and correction. It is shown how stepwise latent class analysis can be used to do this. First, a latent class model is estimated in which the scores on a discrete individual-level predictor are used to construct group-level latent classes. Second, this latent class model is used to aggregate the individual-level predictor by assigning the groups to the latent classes. Third, a group-level analysis is performed in which the aggregated measures are related to the remaining group-level variables while correcting for the measurement error in the class assignments. This stepwise approach is introduced in a multilevel mediation model with a single individual-level mediator, and compared to existing methods in a simulation study. We also show how a mediation model with multiple group-level latent variables can be used with multiple individual-level mediators, and this model is applied to explain team productivity (group level) as a function of job control (individual level), job satisfaction (individual level), and enriched job design (group level).

2.

Purpose

In this investigation, we argue why and how available intraclass correlation coefficients and other types of reliability estimates can be employed as sample-based reliability estimates within primary and meta-analytic studies when relationships between group-level phenomena are of interest.

Design/Methodology/Approach

Group-level correlations and reliability estimates were obtained from 46 studies examining organizational climate–performance relationships. We illustrate how the group-level reliability estimates can be used to correct correlations for predictor and criterion unreliability. Procedures are presented for computing the sampling variances of individually corrected correlations that account for sampling error in the group-level reliability estimates.

Findings

Support was found for the conservative nature of meta-analytic parameter estimates when group-level reliability information is sample-based as opposed to drawn from assumed population values. In addition, our analyses indicated that conclusions about substantive relationships between group-level variables can change based on availability of sample-based reliabilities within both primary and meta-analytic studies.

Implications

Results from this study suggest that researchers should rely on sample-based meta-analytic procedures when examining the generalizability of group-level relationships. This study also demonstrates the importance of using all available reliability information and accounting for sampling error in the reliability estimates when conducting meta-analyses at the group level of analysis.

Originality/Value

This study breaks ground by systematically examining the use of intraclass correlation coefficients as reliability estimates within group-level meta-analytic studies. Furthermore, illustrative analyses provide guidance to primary and meta-analytic researchers in regard to how to correct group-level correlations for unreliability in the predictor, criterion, or both whenever and in whatever proportions the artifact information is available.
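The correction procedure described under Design translates into the classical disattenuation formula, r_corrected = r_xy / √(r_xx·r_yy). A minimal sketch with illustrative values (not figures from the study):

```python
import math

def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for unreliability in predictor and
    criterion: r_true = r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Illustrative values: an observed climate-performance correlation of .25
# with group-level (ICC-type) reliabilities of .70 and .80.
print(f"corrected r = {disattenuate(0.25, 0.70, 0.80):.3f}")  # ~0.334
```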

3.
Behavioral researchers often linearly regress a criterion on multiple predictors, aiming to gain insight into the relations between the criterion and predictors. Obtaining this insight from the ordinary least squares (OLS) regression solution may be troublesome, because OLS regression weights show only the effect of a predictor on top of the effects of other predictors. Moreover, when the number of predictors grows larger, it becomes likely that the predictors will be highly collinear, which makes the regression weights’ estimates unstable (i.e., the “bouncing beta” problem). Among other procedures, dimension-reduction-based methods have been proposed for dealing with these problems. These methods yield insight into the data by reducing the predictors to a smaller number of summarizing variables and regressing the criterion on these summarizing variables. Two promising methods are principal-covariate regression (PCovR) and exploratory structural equation modeling (ESEM). Both simultaneously optimize reduction and prediction, but they are based on different frameworks. The resulting solutions have not yet been compared, so the strengths and weaknesses of the two methods are unclear. In this article, we focus on the extent to which PCovR and ESEM are able to extract the factors that truly underlie the predictor scores and can predict a single criterion. The results of two simulation studies showed that for a typical behavioral dataset, ESEM (using the BIC for model selection) succeeds in this regard more often than PCovR. Yet in 93% of the datasets PCovR performed equally well, and in the case of 48 predictors, 100 observations, and large differences in the strengths of the factors, PCovR even outperformed ESEM.
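As a rough illustration of how PCovR optimizes reduction and prediction simultaneously, the sketch below implements a commonly described closed-form solution: component scores are the leading eigenvectors of a weighted sum of XXᵀ and the outer product of the criterion's fitted values. The weighting and normalization details are assumptions for illustration, not the exact algorithm evaluated in the article.

```python
import numpy as np

def pcovr(X, y, n_components=2, alpha=0.5):
    """Principal-covariate regression sketch: components balance
    reconstructing X (weight alpha) against predicting y (weight 1 - alpha)."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    y_hat = X @ np.linalg.lstsq(X, y, rcond=None)[0]    # y projected onto X
    G = (alpha * (X @ X.T) / np.sum(X**2)
         + (1 - alpha) * np.outer(y_hat, y_hat) / np.sum(y**2))
    _, eigvecs = np.linalg.eigh(G)                       # ascending eigenvalues
    T = eigvecs[:, ::-1][:, :n_components]               # orthonormal scores
    return T, X.T @ T, T.T @ y                           # scores, loadings, weights

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=100)
T, loadings, b = pcovr(X, y)
print(b)  # regression weights of the criterion on the two components
```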

4.
Dominance‐based ordinal multiple regression (DOR) is designed to answer ordinal questions about relationships among ordinal variables. Only one parameter per predictor is estimated, and the number of parameters is constant for any number of outcome levels. The majority of existing simulation evaluations of DOR use predictors that are continuous or ordinal with many categories, so the performance of the method is not well understood for ordinal variables with few categories. This research evaluates DOR in simulations using three‐category ordinal variables for the outcome and predictors, with a comparison to the cumulative logits proportional odds model (POC). Although ordinary least squares (OLS) regression is inapplicable for theoretical reasons, it was also included in the simulations because of its popularity in the social sciences. Most simulation outcomes indicated that DOR performs well for variables with few categories, and is preferable to the POC for smaller samples and when the proportional odds assumption is violated. Nevertheless, confidence interval coverage for DOR was not flawless and possibilities for improvement are suggested.
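DOR itself has no standard Python implementation I can point to, so the sketch below fits only the benchmark model from the abstract, the cumulative-logits proportional odds model, assuming statsmodels' OrderedModel accepts integer-coded three-category outcomes; the simulated data are illustrative.

```python
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Simulate a three-category ordinal outcome from two predictors.
rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 2))
latent = X @ np.array([1.0, -0.5]) + rng.logistic(size=n)
y = np.digitize(latent, bins=[-1.0, 1.0])   # ordinal codes 0, 1, 2

# One slope per predictor; thresholds shared across the cumulative splits
# (the proportional odds assumption whose violations the article examines).
poc = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(poc.params)
```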

5.
Multiple correlation analysis and means tests were used to test the effectiveness of fifteen psychological, economic, and other variables in explaining the variance in post-job-transfer satisfaction among a sample of managers.
Of the variables to emerge as significant predictors of satisfaction in both subsamples, preference for location of residence predominated. Multiple Classification Analysis (MCA) was then employed to explore which of sixteen urban factors was most important in determining location preference among the managers and their spouses. The MCA showed that size of the city ranked as the major predictor of location preference among both the managers and their spouses.

6.
Abstract

Extended redundancy analysis (ERA) combines linear regression with dimension reduction to explore the directional relationships between multiple sets of predictors and outcome variables in a parsimonious manner. It aims to extract a component from each set of predictors in such a way that it accounts for the maximum variance of the outcome variables. In this article, we extend ERA into the Bayesian framework, calling the result Bayesian ERA (BERA). The advantages of BERA are threefold. First, BERA enables statistical inferences based on samples drawn from the joint posterior distribution of parameters obtained from a Markov chain Monte Carlo algorithm. As such, it does not require any resampling method, which ordinary (frequentist) ERA needs in order to test the statistical significance of parameter estimates. Second, it formally incorporates relevant information obtained from previous research into analyses by specifying informative power prior distributions. Third, BERA handles missing data by implementing multiple imputation using a Markov chain Monte Carlo algorithm, avoiding the potential bias of parameter estimates due to missing data. We assess the performance of BERA through simulation studies and apply BERA to real data regarding academic achievement.
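The abstract does not spell out BERA's sampler, so the following is only a generic random-walk Metropolis sketch for a Bayesian linear regression, illustrating the kind of MCMC posterior sampling BERA's inferences rest on; priors, step size, and data are all illustrative assumptions.

```python
import numpy as np

def log_posterior(beta, X, y, prior_sd=10.0, noise_sd=1.0):
    """Gaussian likelihood with an independent normal prior on each weight."""
    resid = y - X @ beta
    return (-0.5 * np.sum(resid**2) / noise_sd**2
            - 0.5 * np.sum(beta**2) / prior_sd**2)

def metropolis(X, y, n_samples=5000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    lp = log_posterior(beta, X, y)
    draws = []
    for _ in range(n_samples):
        prop = beta + step * rng.normal(size=beta.size)   # random-walk proposal
        lp_prop = log_posterior(prop, X, y)
        if np.log(rng.uniform()) < lp_prop - lp:          # accept/reject step
            beta, lp = prop, lp_prop
        draws.append(beta.copy())
    return np.array(draws)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 0.0, -0.5]) + rng.normal(size=200)
print(metropolis(X, y)[1000:].mean(axis=0))  # posterior means after burn-in
```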

7.
A theoretical discussion of the factor pattern of predictor tests and criterion shows that ordinary test selection methods break down under certain circumstances. It is shown that maximal results may not occur if suppressor variables are present among the predictors. Suggested solutions to the problem include: (1) prior item analysis of tests against the criterion, (2) selection of several trial batteries, including some with suppressor variables, on the basis of a factor analysis of tests and criterion, (3) modification of the usual test selection procedures to include separate solutions based upon each of several starting variables, or (4) the cumbersome and tedious solution of all possible combinations of predictors. The solutions are recommended in the order named above. Although all of the suggested solutions involve added labor and may not be necessary, the test or battery constructor should at least be aware of the problem.
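A tiny simulation makes the suppressor problem concrete: a test nearly uncorrelated with the criterion can still raise R² substantially by removing nuisance variance from another predictor, which is exactly what validity-based selection rules miss. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
true_score = rng.normal(size=n)
nuisance = rng.normal(size=n)

x = true_score + nuisance        # predictor contaminated by nuisance variance
z = nuisance                     # suppressor: taps only the contamination
y = true_score + rng.normal(size=n)

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - np.var(y - X @ beta) / np.var(y)

print(f"r(z, y)       = {np.corrcoef(z, y)[0, 1]: .3f}")   # ~0: z looks useless
print(f"R2 with x     = {r_squared([x], y):.3f}")           # ~.25
print(f"R2 with x, z  = {r_squared([x, z], y):.3f}")        # ~.50
```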

8.
Multilevel analyses are often used to estimate the effects of group-level constructs. However, when using aggregated individual data (e.g., student ratings) to assess a group-level construct (e.g., classroom climate), the observed group mean might not provide a reliable measure of the unobserved latent group mean. In the present article, we propose a Bayesian approach that can be used to estimate a multilevel latent covariate model, which corrects for the unreliable assessment of the latent group mean when estimating the group-level effect. A simulation study was conducted to evaluate the choice of different priors for the group-level variance of the predictor variable and to compare the Bayesian approach with the maximum likelihood approach implemented in the software Mplus. Results showed that, under problematic conditions (i.e., small number of groups, predictor variable with a small ICC), the Bayesian approach produced more accurate estimates of the group-level effect than the maximum likelihood approach did.
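The Bayesian model itself is not reproduced here; the sketch below shows only the underlying correction idea in an empirical-Bayes style: shrink each observed group mean toward the grand mean in proportion to the reliability of that group mean, λ_j = τ²/(τ² + σ²/n_j). The variance estimators and data are illustrative assumptions.

```python
import numpy as np

def shrunken_group_means(values, groups):
    """Shrink observed group means toward the grand mean by the reliability
    of each group mean: lambda_j = tau^2 / (tau^2 + sigma^2 / n_j)."""
    values, groups = np.asarray(values), np.asarray(groups)
    labels = np.unique(groups)
    means = np.array([values[groups == g].mean() for g in labels])
    ns = np.array([(groups == g).sum() for g in labels])
    sigma2 = np.mean([values[groups == g].var(ddof=1) for g in labels])
    tau2 = max(means.var(ddof=1) - sigma2 / ns.mean(), 1e-8)  # method of moments
    lam = tau2 / (tau2 + sigma2 / ns)
    return values.mean() + lam * (means - values.mean()), lam

rng = np.random.default_rng(3)
groups = np.repeat(np.arange(20), 5)               # 20 groups of 5: few, small
true_means = rng.normal(0, 0.3, size=20)           # small ICC
values = true_means[groups] + rng.normal(0, 1, size=groups.size)
latent_est, lam = shrunken_group_means(values, groups)
print(lam.round(2))  # low reliabilities -> strong shrinkage toward the mean
```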

9.
Moderation analysis is useful for addressing interesting research questions in social sciences and behavioural research. In practice, moderated multiple regression (MMR) models have been most widely used. However, missing data pose a challenge, mainly because the interaction term is a product of two or more variables and thus is a non-linear function of the involved variables. Normal-distribution-based maximum likelihood (NML) has been proposed and applied for estimating MMR models with incomplete data. When data are missing completely at random, moderation effect estimates are consistent. However, simulation results have found that when data in the predictor are missing at random (MAR), NML can yield inaccurate estimates of moderation effects when the moderation effects are non-null. Simulation studies are subject to the limitation of confounding systematic bias with sampling errors. Thus, the purpose of this paper is to analytically derive asymptotic bias of NML estimates of moderation effects with MAR data. Results show that when the moderation effect is zero, there is no asymptotic bias in moderation effect estimates with either normal or non-normal data. When the moderation effect is non-zero, however, asymptotic bias may exist and is determined by factors such as the moderation effect size, missing-data proportion, and type of missingness dependence. Our analytical results suggest that researchers should apply NML to MMR models with caution when missing data exist. Suggestions are given regarding moderation analysis with missing data.

10.
We show that power and sample size tables developed by Cohen (1988, pp. 289–354, 381–389) produce incorrect estimates for factorial designs: power is underestimated, and sample size is overestimated. The source of this bias is shrinkage in the implied value of the noncentrality parameter, λ, caused by using Cohen’s adjustment to n for factorial designs (pp. 365 and 396). The adjustment was intended to compensate for differences in the actual versus presumed (by the tables) error degrees of freedom; however, more accurate estimates are obtained if the tables are used without adjustment. The problems with Cohen’s procedure were discovered while testing subroutines in DATASIM 1.2 for computing power and sample size in completely randomized, randomized-blocks, and split-plot factorial designs. The subroutines give the user the ability to generate power and sample size tables that are as easy to use as Cohen’s, but that eliminate the conservative bias of his tables. We also implemented several improvements relative to “manual” use of Cohen’s tables: (1) since the user can control the specific values of 1 − β, n, and f used on the rows and columns of the table, interpolation is never required; (2) exact as opposed to approximate solutions for the noncentral F distribution are employed; (3) solutions for factorial designs, including those with repeated measures factors, take into account the actual error degrees of freedom for the effect being tested; and (4) provision is made for the computation of power for applications involving the doubly noncentral F distribution.
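A minimal sketch of the exact noncentral-F computation the abstract advocates, using the actual error degrees of freedom rather than a tabled approximation. The convention λ = f²·N is one common choice (Cohen's tables effectively use a related quantity) and is an assumption here, as is the illustrative design.

```python
from scipy.stats import f as f_dist, ncf

def anova_power(f_effect, n_per_cell, n_cells, df_effect, alpha=0.05):
    """Power for a fixed-effects F test via the noncentral F distribution."""
    N = n_per_cell * n_cells
    df_error = N - n_cells              # actual error df for the design
    lam = f_effect**2 * N               # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df_effect, df_error)
    return 1 - ncf.cdf(f_crit, df_effect, df_error, lam)

# Medium effect (f = .25), 2 x 3 design, 10 subjects per cell,
# testing the three-level main effect (df = 2).
print(f"power = {anova_power(0.25, n_per_cell=10, n_cells=6, df_effect=2):.3f}")
```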

11.

Purpose

Given the common practice of using employee attitude surveys as a group-level intervention, this study used a group-level approach to examine the relationship between group satisfaction and group nonresponse.

Design/Methodology/Approach

Samples from four large organizations enabled job satisfaction scores to be aggregated to the work group level and correlated with group-level response rates. Additional regression analysis was conducted to control for a number of confounding variables at the group level.

Findings

Aggregate job satisfaction showed significant associations with group-level response rates across each of the samples examined. Work groups with higher aggregate job satisfaction had significantly higher response rates. Regression analyses showed that, in addition to job satisfaction, work group size, heterogeneity in tenure, and heterogeneity in gender composition all had significant effects on response rates.

Implications

Social influence processes may operate at the group level to increase homogeneity of job-relevant attitudes and similarity in survey response behavior. Future research should be designed to investigate the effects of group-level variables on nonresponse.

Originality/Value

The current study adds to the literature by demonstrating that work group variables may play an important role in explaining nonresponse in employee attitude surveys. Because the processes underlying survey response are likely to be different at different levels of analysis, the investigation of nonresponse as a group-level phenomenon creates new opportunities for research and practice.

12.
Regularization, or shrinkage estimation, refers to a class of statistical methods that constrain the variability of parameter estimates when fitting models to data. These constraints move parameters toward a group mean or toward a fixed point (e.g., 0). Regularization has gained popularity across many fields for its ability to increase predictive power over classical techniques. However, articles published in JEAB and other behavioral journals have yet to adopt these methods. This paper reviews some common regularization schemes and speculates as to why articles published in JEAB do not use them. In response, we propose our own shrinkage estimator that avoids some of the possible objections associated with the reviewed regularization methods. Our estimator works by mixing weighted individual and group (WIG) data rather than by constraining parameters. We test this method on a problem of model selection. Specifically, we conduct a simulation study on the selection of matching‐law‐based punishment models, comparing WIG with ordinary least squares (OLS) regression, and find that, on average, WIG outperforms OLS in this context.
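The paper's WIG estimator is only characterized at a high level in this abstract, so the sketch below illustrates the data-mixing idea — fit each subject to a weighted blend of individual and group-mean data, so estimates shrink toward the group without penalized parameters — rather than the authors' exact estimator.

```python
import numpy as np

def wig_fit(x, Y, weight):
    """Per-subject line fits on a weighted mix of individual data and the
    group-mean data. weight = 1 gives pure individual fits; weight = 0
    gives the group curve, so estimates shrink as weight decreases."""
    group_curve = Y.mean(axis=0)
    design = np.column_stack([np.ones_like(x), x])
    betas = []
    for y_subj in Y:
        blended = weight * y_subj + (1 - weight) * group_curve
        beta, *_ = np.linalg.lstsq(design, blended, rcond=None)
        betas.append(beta)
    return np.array(betas)   # rows: (intercept, slope) per subject

rng = np.random.default_rng(7)
x = np.linspace(0, 5, 8)
true_slopes = rng.normal(1.0, 0.3, size=6)
Y = true_slopes[:, None] * x + rng.normal(0, 0.8, size=(6, x.size))
print(wig_fit(x, Y, weight=0.6)[:, 1])  # slopes pulled toward the group mean
```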

13.
Numerous rules-of-thumb have been suggested for determining the minimum number of subjects required to conduct multiple regression analyses. These rules-of-thumb are evaluated by comparing their results against those based on power analyses for tests of hypotheses of multiple and partial correlations. The results did not support the use of rules-of-thumb that simply specify some constant (e.g., 100 subjects) as the minimum number of subjects or a minimum ratio of number of subjects (N) to number of predictors (m). Some support was obtained for a rule-of-thumb that N ≥ 50 + 8m for the multiple correlation and N ≥ 104 + m for the partial correlation. However, the rule-of-thumb for the multiple correlation yields values of N that are too large when m ≥ 7, and both rules-of-thumb assume all studies have a medium-size relationship between criterion and predictors. Accordingly, a slightly more complex rule-of-thumb is introduced that estimates minimum sample size as a function of effect size as well as the number of predictors. It is argued that researchers should use methods to determine sample size that incorporate effect size.
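The two partially supported rules translate directly into code; a trivial sketch comparing them across numbers of predictors:

```python
def min_n_multiple(m: int) -> int:
    """Rule of thumb for testing the multiple correlation: N >= 50 + 8m."""
    return 50 + 8 * m

def min_n_partial(m: int) -> int:
    """Rule of thumb for testing a partial correlation: N >= 104 + m."""
    return 104 + m

for m in (3, 7, 12):
    print(f"m = {m:2d}: multiple R needs N >= {min_n_multiple(m)}, "
          f"partial r needs N >= {min_n_partial(m)}")
```

Both rules presume a medium effect size, which is the limitation the article's effect-size-based alternative addresses.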

14.
The method of oversampling data from a preselected range of a variable’s distribution is often applied by researchers who wish to study rare outcomes without substantially increasing sample size. Despite frequent use, however, it is not known whether this method introduces statistical bias due to disproportionate representation of a particular range of data. The present study employed simulated data sets to examine how oversampling introduces systematic bias in effect size estimates (of the relationship between oversampled predictor variables and the outcome variable), as compared with estimates based on a random sample. In general, results indicated that increased oversampling was associated with a decrease in the absolute value of effect size estimates. Critically, however, the actual magnitude of this decrease in effect size estimates was nominal. This finding thus provides the first evidence that the use of the oversampling method does not systematically bias results to a degree that would typically impact results in behavioral research. Examining the effect of sample size on oversampling yielded an additional important finding: For smaller samples, the use of oversampling may be necessary to avoid spuriously inflated effect sizes, which can arise when the number of predictor variables and rare outcomes is comparable.
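A minimal sketch of the design itself, under illustrative assumptions: keep every case beyond a cutoff on the predictor plus a random slice of the remainder, then compare the effect size against the full sample. The direction and size of any distortion depend on the data-generating model; quantifying that is what the article's simulations do.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50_000
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)

def corr_with_oversampling(frac_extreme):
    """Retain all cases in the top `frac_extreme` tail of x plus an
    equally sized random draw from the rest, then recompute r."""
    cut = np.quantile(x, 1 - frac_extreme)
    extreme = x >= cut
    keep_rest = rng.random(n) < frac_extreme
    idx = extreme | (~extreme & keep_rest)
    return np.corrcoef(x[idx], y[idx])[0, 1]

print(f"full sample r = {np.corrcoef(x, y)[0, 1]:.3f}")
for frac in (0.25, 0.10, 0.05):
    print(f"oversampled {frac:.0%} tail: r = {corr_with_oversampling(frac):.3f}")
```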

15.
16.
Stevens’ power law for judgments of sensation has a long history in psychology and is used in many psychophysical investigations of the effects of predictors such as group or condition. Stevens’ formulation Ψ = aP^n, where Ψ is the psychological judgment, P is the physical intensity, and n is the power-law exponent, is usually tested by plotting log(Ψ) against log(P). In some, but by no means all, studies, effects on the scale parameter, a, are also investigated. This two-parameter model is simple but known to be flawed, for at least some modalities. Specifically, three-parameter functions that include a threshold parameter produce a better fit for many data sets. In addition, direct nonlinear fits of power laws are often better than regressions of log-transformed variables. However, such potentially flawed methods continue to be used because of the assumption that the approximations are “close enough” not to make any difference to the conclusions drawn (or possibly through ignorance of the errors in these assumptions). We investigate two modalities in detail: duration and roughness. We show that a three-parameter power law is the best fitting of several plausible models. Comparison between this model and the prevalent two-parameter version of Stevens’ power law shows significant differences in the parameter estimates, with at least medium effect sizes for duration.
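A minimal sketch contrasting direct nonlinear fits of the two-parameter law Ψ = aP^n with a three-parameter variant that adds a threshold, Ψ = a(P − P₀)^n; the threshold parameterization is one plausible form assumed for illustration, and the simulated data are not from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_param(P, a, n):
    return a * P**n

def three_param(P, a, n, P0):
    return a * np.clip(P - P0, 1e-12, None)**n   # P0: threshold parameter

rng = np.random.default_rng(5)
P = np.linspace(0.2, 10, 60)
psi = 2.0 * (P - 0.15)**0.8 * np.exp(rng.normal(0, 0.05, P.size))  # true threshold

# Direct nonlinear fits rather than regressing log(psi) on log(P).
p2, _ = curve_fit(two_param, P, psi, p0=[1.0, 1.0])
p3, _ = curve_fit(three_param, P, psi, p0=[1.0, 1.0, 0.0])

for name, fn, p in [("2-param", two_param, p2), ("3-param", three_param, p3)]:
    rss = np.sum((psi - fn(P, *p))**2)
    print(f"{name}: estimates = {np.round(p, 3)}, RSS = {rss:.3f}")
```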

17.
While effect size estimates, post hoc power estimates, and a priori sample size determination are becoming a routine part of univariate analyses involving measured variables (e.g., ANOVA), such measures and methods have not been articulated for analyses involving latent means. The current article presents standardized effect size measures for latent mean differences inferred from both structured means modeling and MIMIC approaches to hypothesis testing about differences among means on a single latent construct. These measures are then related to post hoc power analysis, a priori sample size determination, and a relevant measure of construct reliability.

18.
Ayala Cohen, Psychometrika, 1986, 51(3), 379–391
A test is proposed for the equality of the variances of k ≥ 2 correlated variables. Pitman's test for k = 2 reduces the null hypothesis to zero correlation between their sum and their difference. Its extension, eliminating nuisance parameters by a bootstrap procedure, is valid for any correlation structure between the k normally distributed variables. A Monte Carlo study for several combinations of sample sizes and numbers of variables is presented, comparing the level and power of the new method with previously published tests. Some nonnormal data are included, for which the empirical level tends to be slightly higher than the nominal one. The results show that our method is close in power to the asymptotic tests, which are extremely sensitive to nonnormality, yet it is robust and much more powerful than other robust tests.
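For k = 2, Pitman's reduction is a one-liner: test whether the sum and the difference of the two variables are uncorrelated, since cov(x₁ + x₂, x₁ − x₂) = var(x₁) − var(x₂). A minimal sketch (the article's bootstrap extension to k > 2 is not reproduced here):

```python
import numpy as np
from scipy.stats import pearsonr

def pitman_test(x1, x2):
    """Pitman's test for equal variances of two correlated variables:
    var(x1) = var(x2) iff corr(x1 + x2, x1 - x2) = 0."""
    return pearsonr(x1 + x2, x1 - x2)

rng = np.random.default_rng(9)
n = 200
z = rng.normal(size=n)                       # shared component -> correlation
x1 = z + rng.normal(scale=1.0, size=n)       # variance 2.0
x2 = z + rng.normal(scale=1.5, size=n)       # variance 3.25
r, p = pitman_test(x1, x2)
print(f"r(sum, diff) = {r:.3f}, p = {p:.4f}")  # unequal variances -> r != 0
```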

19.
Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus dichotomous outcomes. For dichotomous outcomes, normal ML path estimates have bias that worsens as latent factor skew/kurtosis increases and does not diminish as sample size increases, whereas the mixture factor analysis model produces nearly unbiased estimators as sample sizes increase (500 and greater) and offers near nominal coverage probability. For continuous outcome variables, both methods produce factor loading estimates with minimal bias regardless of latent factor skew, but the mixture factor analysis is more efficient. The method is demonstrated using data motivated by a study on youth with cystic fibrosis examining predictors of treatment adherence. In summary, mixture factor analysis provides improvements over normal ML estimation in the presence of skewed/kurtotic latent factors, but due to variability in the estimator relating the latent factor to dichotomous outcomes and computational issues, the improvements were only fully realized, in this study, at larger sample sizes (500 and greater).

20.
Abstract

When estimating multiple regression models with incomplete predictor variables, it is necessary to specify a joint distribution for the predictor variables. A convenient assumption is that this distribution is a multivariate normal distribution, which is also the default in many statistical software packages. This distribution will in general be misspecified if predictors with missing data have nonlinear effects (e.g., x²) or are included in interaction terms (e.g., x·z). In the present article, we introduce a factored regression modeling approach for estimating regression models with missing data that is based on maximum likelihood estimation. In this approach, the model likelihood is factorized into a part that is due to the model of interest and a part that is due to the model for the incomplete predictors. In three simulation studies, we showed that the factored regression modeling approach produced valid estimates of interaction and nonlinear effects in regression models with missing values on categorical or continuous predictor variables under a broad range of conditions. We developed the R package mdmb, which facilitates a user-friendly application of the factored regression modeling approach, and present a real-data example that illustrates the flexibility of the software.
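mdmb is an R package, so the sketch below only illustrates the factorization f(y, x) = f(y | x)·f(x) in Python for a single linear predictor with missing values, where the incomplete predictor can be integrated out analytically; the nonlinear and interaction machinery the article addresses is omitted, and all names and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, y, x):
    """Factored likelihood f(y, x) = f(y | x) * f(x). For cases with x
    missing, x is integrated out analytically (both factors Gaussian):
    marginally y ~ N(b0 + b1*mu, s^2 + b1^2 * t^2)."""
    b0, b1, log_s, mu, log_t = theta
    s, t = np.exp(log_s), np.exp(log_t)
    obs = ~np.isnan(x)
    ll = norm.logpdf(y[obs], b0 + b1 * x[obs], s).sum()       # f(y | x)
    ll += norm.logpdf(x[obs], mu, t).sum()                    # f(x)
    ll += norm.logpdf(y[~obs], b0 + b1 * mu,
                      np.sqrt(s**2 + b1**2 * t**2)).sum()     # marginal f(y)
    return -ll

rng = np.random.default_rng(2)
n = 500
x = rng.normal(1.0, 2.0, size=n)
y = 0.5 + 0.8 * x + rng.normal(size=n)
x_mis = np.where(rng.random(n) < 0.3, np.nan, x)              # 30% missing

res = minimize(neg_loglik, x0=np.zeros(5), args=(y, x_mis))
print(res.x[:2])  # intercept and slope, close to (0.5, 0.8)
```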
