Abstract: Regularization, or shrinkage estimation, refers to a class of statistical methods that constrain the variability of parameter estimates when fitting models to data. These constraints pull parameter estimates toward a group mean or toward a fixed point (e.g., 0). Regularization has gained popularity across many fields for its ability to increase predictive power over classical techniques. However, research published in JEAB and other behavioral journals has yet to adopt these methods. This paper reviews some common regularization schemes and speculates as to why they have not appeared in JEAB. In response, we propose our own shrinkage estimator that avoids some of the possible objections associated with the reviewed regularization methods. Our estimator works by mixing weighted individual and group (WIG) data rather than by constraining parameters. We test this method on a problem of model selection. Specifically, we conduct a simulation study on the selection of matching-law-based punishment models, comparing WIG with ordinary least squares (OLS) regression, and find that, on average, WIG outperforms OLS in this context.
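
To make the data-mixing idea concrete, the following is a minimal sketch of one way a WIG-style estimate could be computed, assuming it reduces to fitting a model to a convex combination of an individual subject's data and the group-average data. The weight `w`, the function name `wig_fit`, and the choice of a simple linear OLS fit are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal, illustrative sketch of a "weighted individual and group" (WIG)
# style estimate: mix the data themselves (rather than constraining fitted
# parameters), then fit by ordinary least squares. All names, the weight,
# and the linear model are assumptions for illustration only.
import numpy as np


def wig_fit(x, y_individual, y_group_mean, w=0.7):
    """Fit a straight line (slope, intercept) to WIG-blended responses.

    x             : shared predictor values (1-D array)
    y_individual  : one subject's responses at x
    y_group_mean  : mean response across subjects at x
    w             : weight on the individual data (0 = all group, 1 = all individual)
    """
    # Convex combination of individual and group data.
    y_blend = w * np.asarray(y_individual) + (1.0 - w) * np.asarray(y_group_mean)
    # OLS fit of a degree-1 polynomial to the blended data.
    slope, intercept = np.polyfit(np.asarray(x, dtype=float), y_blend, deg=1)
    return slope, intercept


# Toy usage: a noisy individual subject pulled toward a smoother group average.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y_group = 2.0 * x + 1.0
y_subject = y_group + rng.normal(scale=0.8, size=x.size)
print(wig_fit(x, y_subject, y_group, w=0.7))
```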