Similar Literature
 20 similar documents retrieved (search time: 46 ms)
1.
In psychology, education, clinical medicine, and related fields, a growing number of researchers are attending to within-person dynamics, that is, to how behavior, psychological states, and clinical outcomes change over time, and are emphasizing models tailored to the individual. Intensive longitudinal designs, which measure individuals repeatedly across many closely spaced time points, are particularly well suited to studying within-person psychological processes and the mechanisms behind them. In recent years, intensive longitudinal research has become a major focus in psychology, yet many such studies still rely on fairly traditional analytic methods. The methodological literature now offers a range of models for intensive longitudinal data. The mainstream approaches include top-down methods, represented by the Dynamic Structural Equation Model (DSEM), and bottom-up methods, represented by Group Iterative Multiple Model Estimation (GIMME). Both can conveniently model the autoregressive and cross-lagged effects in intensive longitudinal data.
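Both DSEM and GIMME build on autoregressive and cross-lagged effects. As a minimal illustration of what those effects are (a hypothetical bivariate VAR(1) simulated with made-up coefficients and recovered by ordinary least squares; this is not DSEM or GIMME itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lagged-effects matrix: diagonal entries are autoregressive
# effects, off-diagonal entries are cross-lagged effects (illustrative values).
Phi = np.array([[0.5, 0.2],
                [0.1, 0.4]])

# Simulate T densely spaced measurements of two variables for one person.
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = Phi @ y[t - 1] + rng.normal(scale=0.5, size=2)

# Recover Phi by regressing y_t on y_{t-1} (no intercept; process has mean 0).
X, Y = y[:-1], y[1:]
Phi_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

print(np.round(Phi_hat, 2))
```

With this many time points the least-squares estimates land close to the generating coefficients, which is the basic pattern both modeling traditions then elaborate with group-level structure.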

2.
Tutorial on maximum likelihood estimation   Cited by: 2 (self-citations: 0, other citations: 2)

3.
Researchers who collect multivariate time-series data across individuals must decide whether to model the dynamic processes at the individual level or at the group level. A recent innovation, group iterative multiple model estimation (GIMME), offers one solution to this dichotomy by identifying group-level time-series models in a data-driven manner while also reliably recovering individual-level patterns of dynamic effects. GIMME is unique in that it does not assume homogeneity in processes across individuals in terms of the patterns or weights of temporal effects. However, it can be difficult to make inferences from the nuances in varied individual-level patterns. The present article introduces an algorithm that arrives at subgroups of individuals that have similar dynamic models. Importantly, the researcher does not need to decide the number of subgroups. The final models contain reliable group-, subgroup-, and individual-level patterns that enable generalizable inferences, subgroups of individuals with shared model features, and individual-level patterns and estimates. We show that integrating community detection into the GIMME algorithm improves upon current standards in two important ways: (1) providing reliable classification and (2) increasing the reliability in the recovery of individual-level effects. We demonstrate this method on functional MRI from a sample of former American football players.

4.
Over the past decade, Mokken scale analysis (MSA) has rapidly grown in popularity among researchers from many different research areas. This tutorial provides researchers with a set of techniques and a procedure for their application, such that the construction of scales that have superior measurement properties is further optimized, taking full advantage of the properties of MSA. First, we define the conceptual context of MSA, discuss the two item response theory (IRT) models that constitute the basis of MSA, and discuss how these models differ from other IRT models. Second, we discuss dos and don'ts for MSA; the don'ts include misunderstandings we have frequently encountered with researchers in our three decades of experience with real‐data MSA. Third, we discuss a methodology for MSA on real data that consist of a sample of persons who have provided scores on a set of items that, depending on the composition of the item set, constitute the basis for one or more scales, and we use the methodology to analyse an example real‐data set.

5.
In recent years, researchers and practitioners in the behavioral sciences have profited from a growing literature on delay discounting. The purpose of this article is to provide readers with a brief tutorial on how to use Microsoft Office Excel 2010 and Excel for Mac 2011 to analyze discounting data to yield parameters for both the hyperbolic discounting model and area under the curve. This tutorial is intended to encourage the quantitative analysis of behavior in both research and applied settings by readers with relatively little formal training in nonlinear regression.
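The same two quantities are easy to compute outside Excel. A minimal Python sketch (hypothetical indifference-point data; SciPy's `curve_fit` standing in for Excel's Solver) fits Mazur's hyperbolic model V = A / (1 + kD) with A = 1 and computes area under the curve over the observed delay range:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical indifference points: subjective value of a delayed reward
# (as a proportion of its nominal amount) at each delay in days.
delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)
values = np.array([0.95, 0.85, 0.60, 0.40, 0.30, 0.20])

# Mazur's hyperbolic discounting model with amount normalized to A = 1.
def hyperbolic(D, k):
    return 1.0 / (1.0 + k * D)

(k_hat,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])

# Area under the curve (trapezoidal rule) with delays scaled to [0, 1];
# smaller AUC indicates steeper discounting.
x = delays / delays.max()
auc = np.sum(np.diff(x) * (values[1:] + values[:-1]) / 2)

print(round(k_hat, 4), round(auc, 3))
```

Larger fitted k means steeper discounting; AUC is the model-free companion measure the tutorial also covers.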

6.
Examining disparities in social outcomes as a function of gender, age, or race has a long tradition in psychology and other social sciences. With an increasing availability of large naturalistic data sets, researchers are afforded the opportunity to study the effects of demographic characteristics with real‐world data and high statistical power. However, since traditional studies rely on human raters to assess demographic characteristics, limits in participant pools can hinder researchers from analyzing large data sets. Automated procedures offer a new solution to the classification of face images. Here, we present a tutorial on how to use two face classification algorithms, Face++ and Kairos. We also test and compare their accuracy under varying conditions and provide practical recommendations for their use. Drawing on two face databases (n = 2,805 images), we find that classification accuracy is (a) relatively high, with Kairos generally outperforming Face++, (b) similar for standardized and more variable images, and (c) dependent on target demographics. For example, accuracy was lower for Hispanic and Asian (vs. Black and White) targets. In sum, we propose that automated face classification can be a useful tool for researchers interested in studying the effects of demographic characteristics in large naturalistic data sets.

7.
Applied researchers increasingly report the use of paraprofessionals to implement key program components. However, despite such apparent advantages as increased availability and lower salaries, problems in maintaining acceptable levels of on-the-job performance in these workers have been reported. This study assessed the effects of a supervisory package on the work behavior of five paraprofessional tutors in a remedial reading program. The package consisted of written handouts and instructions, tests of tutor understanding, a video tape, mention of tutor performance by supervisors, and publicly posted feedback on work performance. One randomly chosen tutor received feedback each day on (1) his degree of completeness in tutoring one student's answers to comprehension-check questions, (2) his accuracy in tabulating that student's data sheet, and (3) his promptness in beginning the first student's tutorial session. The supervisory package produced marked improvement in completeness, some improvement in accuracy, but no improvement in promptness. Application of the supervisory package seemed practical, as an average of nine daily tutorial sessions (approximately 270 min of tutoring) required a total daily average of only 28 min of supervision. It was concluded that completeness performance by nondegreed, paraprofessional tutors was closely related to the extent to which they were supervised. Despite the fact that no improvement was observed in tutor promptness, and only partial improvement was observed in tutor accuracy, the improvement to near-perfect levels in tutor completeness suggests that further research is warranted to develop supervisory packages that might ensure the reliable and efficient use of paraprofessional workers.

8.
Event history calendars (EHCs) are popular tools for retrospective data collection. Originally conceptualized as face‐to‐face interviews, EHCs contain various questions about the respondents' autobiography in order to use their experiences as cues to facilitate remembering. For relationship researchers, EHCs are particularly valuable when trying to reconstruct the relational past of individuals. However, although many studies are conducted online nowadays, no freely available online adaptation of the EHC exists yet. In this tutorial, detailed instructions are provided on how to implement an online EHC for the reconstruction of romantic relationship histories within the open‐source framework formr. We also showcase ways to customize the online EHC and provide a template that researchers can adapt for their own purposes.

9.
The p2 model is a statistical model for the analysis of binary relational data with covariates, as occur in social network studies. It can be characterized as a multinomial regression model with crossed random effects that reflect actor heterogeneity and dependence between the ties from and to the same actor in the network. Three Markov chain Monte Carlo (MCMC) estimation methods for the p2 model are presented to improve iterative generalized least squares (IGLS) estimation developed earlier, two of which use random walk proposals. The third method, an independence chain sampler, and one of the random walk algorithms use normal approximations of the binary network data to generate proposals in the MCMC algorithms. A large‐scale simulation study compares MCMC estimates with IGLS estimates for networks with 20 and 40 actors. It was found that the IGLS estimates have a smaller variance but are severely biased, while the MCMC estimates have a larger variance with a small bias. For networks with 20 actors, mean squared errors are generally comparable or smaller for the IGLS estimates. For networks with 40 actors, mean squared errors are the smallest for the MCMC estimates. Coverage rates of confidence intervals are good for the MCMC estimates but not for the IGLS estimates.
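The random-walk proposal idea can be shown in miniature. A toy sketch with a one-parameter Bernoulli model and made-up tie counts (illustrative only; not the p2 model, which has crossed random effects and covariates):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical binary tie data: 60 ties present out of 100 dyads.
successes, n = 60, 100

def log_post(beta):
    """Log posterior of a logit-scale intercept: Bernoulli likelihood
    with a N(0, 10^2) prior (a stand-in, not the p2 model itself)."""
    p = 1 / (1 + np.exp(-beta))
    return successes * np.log(p) + (n - successes) * np.log(1 - p) - beta**2 / 200

# Random-walk Metropolis: propose beta' ~ N(beta, 0.3^2) and accept with
# probability min(1, post(beta') / post(beta)).
beta, draws = 0.0, []
for _ in range(5000):
    prop = beta + 0.3 * rng.normal()
    if np.log(rng.random()) < log_post(prop) - log_post(beta):
        beta = prop
    draws.append(beta)

post_mean = np.mean(draws[1000:])  # discard the first 1000 draws as burn-in
print(round(post_mean, 2))
```

The chain's mean should settle near logit(0.6) ≈ 0.41; the p2 samplers in the article apply the same accept/reject logic to a much richer parameter vector.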

10.
Deterministic blockmodelling is a well-established clustering method for both exploratory and confirmatory social network analysis seeking partitions of a set of actors so that actors within each cluster are similar with respect to their patterns of ties to other actors (or, in some cases, other objects when considering two-mode networks). Even though some of the historical foundations for certain types of blockmodelling stem from the psychological literature, applications of deterministic blockmodelling in psychological research are relatively rare. This scarcity is potentially attributable to three factors: a general unfamiliarity with relevant blockmodelling methods and applications; a lack of awareness of the value of partitioning network data for understanding group structures and processes; and the unavailability of such methods on software platforms familiar to most psychological researchers. To tackle the first two items, we provide a tutorial presenting a general framework for blockmodelling and describe two of the most important types of deterministic blockmodelling applications relevant to psychological research: structural balance partitioning and two-mode partitioning based on structural equivalence. To address the third problem, we developed a suite of software programs that are available as both Fortran executable files and compiled Fortran dynamic-link libraries that can be implemented in the R software system. We demonstrate these software programs using networks from the literature.

11.
Some computational and statistical techniques that can be used in the analysis of event-related potential (ERP) data are demonstrated. The techniques are fairly elementary but go one step further than do simple area measurement or peak picking, which are most often used in ERP analysis. Both amplitude and latency measurement techniques are considered. Principal components analysis (PCA) and methods for electromyographic onset determination are presented in detail, and Woody filtering is discussed briefly. The techniques are introduced in a nontechnical, tutorial review style. One and the same existing data set is presented, to which the techniques are applied, and practical guidelines for their use are given. The methods are demonstrated against a background of theoretical notions that are related to the definition of ERP components.
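A minimal sketch of the PCA step on simulated ERP-like data (hypothetical component shapes; SVD of the centered trials-by-time matrix):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated "ERP" data: 100 trials x 50 time points, generated from two
# hypothetical temporal components plus measurement noise.
t = np.linspace(0, 1, 50)
comp1 = np.exp(-((t - 0.3) ** 2) / 0.005)   # early peak
comp2 = np.exp(-((t - 0.7) ** 2) / 0.01)    # late peak
scores = rng.normal(size=(100, 2))
data = scores @ np.vstack([comp1, comp2]) + 0.1 * rng.normal(size=(100, 50))

# PCA via singular value decomposition of the mean-centered data matrix;
# rows of Vt are the component waveforms, S**2 their variance shares.
centered = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)

print(np.round(explained[:3], 2))
```

With two real components in the data, the first two variance shares dominate and the corresponding rows of `Vt` recover the early- and late-peak waveforms up to sign.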

12.
A key problem in statistical modeling is model selection, that is, how to choose a model at an appropriate level of complexity. This problem appears in many settings, most prominently in choosing the number of clusters in mixture models or the number of factors in factor analysis. In this tutorial, we describe Bayesian nonparametric methods, a class of methods that side-steps this issue by allowing the data to determine the complexity of the model. This tutorial is a high-level introduction to Bayesian nonparametric methods and contains several examples of their application.
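The core idea, letting the data determine the number of clusters, can be illustrated with the Chinese restaurant process, the partition distribution underlying Dirichlet process mixtures (illustrative parameters only):

```python
import numpy as np

rng = np.random.default_rng(4)

def crp(n, alpha):
    """Draw a partition of n items from the Chinese restaurant process.

    Item i joins an existing cluster with probability proportional to the
    cluster's size, or opens a new cluster with probability proportional
    to alpha, so the number of clusters is not fixed in advance (it grows
    roughly as alpha * log(n) on average).
    """
    assignments = [0]
    counts = [1]
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)      # open a new cluster
        else:
            counts[table] += 1    # join an existing cluster
        assignments.append(table)
    return assignments

z = crp(500, alpha=1.0)
print(len(set(z)))  # number of clusters produced by the process itself
```

In a Dirichlet process mixture, this prior over partitions is combined with a likelihood, and posterior inference then decides how many clusters the data support.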

13.
Structural equation modeling: reviewing the basics and moving forward   Cited by: 4 (self-citations: 0, other citations: 4)
This tutorial begins with an overview of structural equation modeling (SEM) that includes the purpose and goals of the statistical analysis as well as terminology unique to this technique. I will focus on confirmatory factor analysis (CFA), a special type of SEM. After a general introduction, CFA is differentiated from exploratory factor analysis (EFA), and the advantages of CFA techniques are discussed. Following a brief overview, the process of modeling will be discussed and illustrated with an example using data from a HIV risk behavior evaluation of homeless adults (Stein & Nyamathi, 2000). Techniques for analysis of nonnormally distributed data as well as strategies for model modification are shown. The empirical example examines the structure of drug and alcohol use problem scales. Although these scales are not specific personality constructs, the concepts illustrated in this article directly correspond to those found when analyzing personality scales and inventories. Computer program syntax and output for the empirical example from a popular SEM program (EQS 6.1; Bentler, 2001) are included.

14.
Project IMPRESS is an interactive social science data analysis system used extensively at Dartmouth College and throughout the DTSS network. The programming techniques used to make it an unobtrusive time-sharing job and the user interface design considerations used to make it a system easy for both students and experienced researchers to run are described and their pedagogical and research values discussed.

15.
The Reduced Reparameterized Unified Model (Reduced RUM) is a diagnostic classification model for educational assessment that has received considerable attention among psychometricians. However, the computational options for researchers and practitioners who wish to use the Reduced RUM in their work, but do not feel comfortable writing their own code, are still rather limited. One option is to use a commercial software package that offers an implementation of the expectation maximization (EM) algorithm for fitting (constrained) latent class models like Latent GOLD or Mplus. But using a latent class analysis routine as a vehicle for fitting the Reduced RUM requires that it be re-expressed as a logit model, with constraints imposed on the parameters of the logistic function. This tutorial demonstrates how to implement marginal maximum likelihood estimation using the EM algorithm in Mplus for fitting the Reduced RUM.

16.
R, an open-source statistical language and data analysis tool, is gaining popularity among psychologists currently teaching statistics. R is especially suitable for teaching advanced topics, such as fitting the dichotomous Rasch model, a topic that involves transforming complicated mathematical formulas into statistical computations. This article describes R’s use as a teaching tool and a data analysis software program in the analysis of the Rasch model in item response theory. It also explains the theory behind, as well as an educator’s goals for, fitting the Rasch model with joint maximum likelihood estimation. This article also summarizes the R syntax for parameter estimation and the calculation of fit statistics. The results produced by R are compared with those obtained from MINISTEP and the output of a conditional logit model. The use of R is encouraged because it is free, supported by a network of peer researchers, and covers both basic and advanced topics in statistics frequently used by psychologists.
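A compact sketch of joint maximum likelihood estimation for the dichotomous Rasch model, here on simulated data with damped Newton updates (an illustration of the estimation logic, not the article's R syntax):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(1)

# Simulate dichotomous responses under a Rasch model with hypothetical
# person abilities (theta) and item difficulties (b).
n_persons, n_items = 500, 10
theta_true = rng.normal(size=n_persons)
b_true = np.linspace(-1.5, 1.5, n_items)
X = (rng.random((n_persons, n_items))
     < sigmoid(theta_true[:, None] - b_true)).astype(float)

# JML has no finite estimate for perfect or zero raw scores; drop them.
scores = X.sum(axis=1)
X = X[(scores > 0) & (scores < n_items)]

# Joint maximum likelihood: damped Newton steps alternating between person
# and item parameters, with difficulties anchored to mean zero.
theta = np.zeros(len(X))
b = np.zeros(n_items)
for _ in range(200):
    P = sigmoid(theta[:, None] - b)
    W = P * (1 - P)
    theta += 0.5 * (X - P).sum(axis=1) / W.sum(axis=1)
    P = sigmoid(theta[:, None] - b)
    W = P * (1 - P)
    b += 0.5 * (P - X).sum(axis=0) / W.sum(axis=0)
    b -= b.mean()  # identification constraint

print(np.round(b, 2))
```

The recovered difficulties track the generating values closely (JML is known to inflate them slightly with few items, one reason the article also compares against a conditional logit model).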

17.
The term “multilevel meta-analysis” is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term “multilevel meta-analysis” is redundant since all meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike and differences are noted in the output provided and estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.
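To see the weighting logic concretely, a random-effects meta-analysis with one common between-study variance estimator (DerSimonian-Laird) fits in a few lines. The effect sizes and variances below are hypothetical; metafor and the other programs reviewed offer this and several other estimators:

```python
import numpy as np

# Hypothetical per-study effect sizes and sampling variances.
y = np.array([0.1, 0.5, -0.2, 0.6, 0.3])
v = np.array([0.02, 0.03, 0.02, 0.04, 0.03])

# Fixed-effect (inverse-variance) pooled estimate.
w = 1 / v
mu_fe = np.sum(w * y) / np.sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2,
# based on Cochran's Q statistic.
Q = np.sum(w * (y - mu_fe) ** 2)
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooling: weights shrink toward equality as tau^2 grows,
# which is exactly the variance-components logic of a multilevel model.
w_re = 1 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

print(round(mu_re, 3), round(se_re, 3), round(tau2, 4))
```

The random-effects weights 1/(v + tau^2) make the multilevel reading explicit: tau^2 is the level-2 variance component, and the pooled mean is the level-2 intercept.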

18.
Although many nonlinear models of cognition have been proposed in the past 50 years, there has been little consideration of corresponding statistical techniques for their analysis. In analyses with nonlinear models, unmodeled variability from the selection of items or participants may lead to asymptotically biased estimation. This asymptotic bias, in turn, renders inference problematic. We show, for example, that a signal detection analysis of recognition memory data leads to asymptotic underestimation of sensitivity. To eliminate asymptotic bias, we advocate hierarchical models in which participant variability, item variability, and measurement error are modeled simultaneously. By accounting for multiple sources of variability, hierarchical models yield consistent and accurate estimates of participant and item effects in recognition memory. This article is written in tutorial format; we provide an introduction to Bayesian statistics, hierarchical modeling, and Markov chain Monte Carlo computational techniques.
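For contrast with the hierarchical approach the article advocates, the conventional per-participant signal detection estimate looks like this (hypothetical response counts; the additive correction is one common fix for extreme rates, not the article's method):

```python
from statistics import NormalDist

def dprime(hits, misses, fas, crs):
    """Equal-variance signal detection sensitivity from response counts.

    Adding 0.5 to each cell (a log-linear-style correction) avoids
    infinite z-scores when hit or false-alarm rates are 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical recognition data: 40 hits / 10 misses on old items,
# 15 false alarms / 35 correct rejections on new items.
print(round(dprime(40, 10, 15, 35), 2))
```

It is exactly this kind of estimate, computed per participant and then aggregated, that the article shows to be asymptotically biased when item and participant variability are ignored.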

19.
This editorial overview provides an introduction to the Suicide and Life‐Threatening Behaviors Special Issue: “Analytic and Methodological Innovations for Suicide‐Focused Research.” We outline several challenges faced by modern suicidologists, such as the need to integrate different analytical and methodological techniques from other fields with the unique data problems in suicide research. Therefore, the overall aim of this issue was to provide up‐to‐date methodological and analytical guidelines, recommendations, and considerations when conducting suicide‐focused research. The articles herein present this information in an accessible way for researchers/clinicians and do not require a comprehensive background in quantitative methods. We introduce the topics covered in this special issue, which include how to conduct power analyses using simulations, work with large data sets, use experimental therapeutics, and choose covariates, as well as open science considerations, decision‐making models, ordinal regression, machine learning, network analysis, and measurement considerations. Many of the topics covered in this issue provide step‐by‐step walkthroughs using worked examples with the accompanied code in free statistical programs (i.e., R). It is our hope that these articles provide suicidologists with valuable information and strategies that can help overcome some of the past limitations of suicide research, and improve the methodological rigor of our field.

20.
Structural vector autoregressive models (VARs) hold great potential for psychological science, particularly for time series data analysis. They capture the magnitude, direction of influence, and temporal (lagged and contemporaneous) nature of relations among variables. Unified structural equation modeling (uSEM) is an optimal structural VAR instantiation, according to large-scale simulation studies, and it is implemented within an SEM framework. However, little is known about the uniqueness of uSEM results. Thus, the goal of this study was to investigate whether multiple solutions result from uSEM analysis and, if so, to demonstrate ways to select an optimal solution. This was accomplished with two simulated data sets, an empirical data set concerning children's dyadic play, and modifications to the group iterative multiple model estimation (GIMME) program, which implements uSEMs with group- and individual-level relations in a data-driven manner. Results revealed multiple solutions when there were large contemporaneous relations among variables. Results also verified several ways to select the correct solution when the complete solution set was generated, such as the use of cross-validation, maximum standardized residuals, and information criteria. This work has immediate and direct implications for the analysis of time series data and for the inferences drawn from those data concerning human behavior.
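The information-criterion route to choosing among candidate solutions can be sketched with two competing lag specifications on simulated data (a hypothetical setup using ordinary least squares and a Gaussian BIC, far simpler than a full uSEM):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated series: y depends on x at lag 1 only (hypothetical structure).
T = 300
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * x[t - 1] + rng.normal(scale=0.5)

def ols_bic(X, y):
    """OLS fit plus Gaussian BIC, with one parameter per column of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

# Candidate 1: lagged effect of x (the true structure here).
bic1 = ols_bic(x[:-1, None], y[1:])
# Candidate 2: contemporaneous effect of x only (a misspecified solution).
bic2 = ols_bic(x[1:, None], y[1:])

print(bic1 < bic2)  # lower BIC flags the better-supported lag structure
```

The article's point is that when several uSEM solutions fit comparably, criteria like this, along with cross-validation and maximum standardized residuals, can adjudicate among them.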


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号