Similar Articles
A total of 20 similar articles were retrieved.
1.
In primary data analysis the individuals who collect the data also analyze it; for meta-analysis an investigator quantitatively combines the statistical results from multiple studies of a phenomenon to reach a conclusion; in secondary data analysis individuals who were not involved in the collection of the data analyze the data. Secondary data analysis may be based on the published data or it may be based on the original data. Most studies of animal cognition involve primary data analysis; it was difficult to identify any that were based on meta-analysis; secondary data analysis based on published data has been used effectively, and examples are given from the research of John Gibbon on scalar timing theory. Secondary data analysis can also be based on the original data if the original data are available in an archive. Such an archive in the field of animal cognition is feasible and desirable.

2.
There are many data collection procedures used during discrete trial teaching, including first-trial data collection, probe data, trial-by-trial data collection, and estimation data. Continuous, or trial-by-trial, data collection consists of the interventionist collecting data on learner behavior on each trial. Estimation data consist of the interventionist estimating learner performance after a teaching session using a rating scale. The purpose of the present study was to compare trial-by-trial data collection to estimation data collection during discrete trial teaching to teach children expressive labels. The data collection procedures were examined in terms of accuracy of data collection, efficiency of teaching (i.e., number of trials delivered per session), and rate of child acquisition of targets. Results of the adapted alternating treatment design, replicated across three participants and multiple targets, found estimation data collection to be as accurate as trial-by-trial data collection in determining mastery of targets. Estimation data collected by the interventionist were also found to be accurate when compared to the actual trial-by-trial data collected after the study concluded.

3.
Unofficial data are empirical findings that guide our research but are generally not reported. This article delineates four forms of unofficial data: casual observation of ourselves and others, unsystematic naturalistic observation, uncodable forms of clinical and phenomenological data, and accidental and nonquantifiable incidents and findings arising during pilot testing and data analysis. The article argues for a broadened conception of empiricism that recognizes unofficial data as data, explores the different contexts of the scientific process in which official and unofficial data are useful, and suggests the implications of the existence and utility of unofficial data for research and publication practices.

4.
Few general-purpose computer programs are available that analyze sequential categorical data. If there were a sequential data interchange standard—a standard way of representing sequential data—then it would be more attractive to write general-purpose computer programs for such data. Moreover, interlaboratory sharing would be facilitated. The present paper defines such a standard, called the sequential data interchange standard, or SDIS. Both the SDIS data language and a parsing program for data that follow SDIS conventions are described. The parsing program will be made available to researchers who wish to develop analysis programs for sequential data.
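The abstract does not reproduce the SDIS syntax itself, so the following Python sketch only illustrates the general idea of a parsing layer that turns a line-oriented sequential record into (code, onset, offset) events that analysis programs can consume; the record format, the Event type, and parse_events are hypothetical stand-ins, not the published SDIS conventions.

```python
from typing import List, NamedTuple

class Event(NamedTuple):
    code: str
    onset: float
    offset: float

def parse_events(lines: List[str]) -> List[Event]:
    """Parse lines of the hypothetical form 'CODE onset-offset', e.g. 'LOOK 0.0-2.5'."""
    events = []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):   # skip blank lines and comments
            continue
        code, span = line.split()
        onset, offset = (float(v) for v in span.split("-"))
        events.append(Event(code, onset, offset))
    return events

if __name__ == "__main__":
    sample = ["# infant gaze, session 1", "LOOK 0.0-2.5", "AWAY 2.5-4.0", "LOOK 4.0-7.1"]
    for event in parse_events(sample):
        print(event)
```

A full parser for a standard of this kind would additionally need to handle the different sequence types the standard defines before handing records to analysis programs.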

5.
Variance component estimation is the key step in a generalizability theory analysis. Using Monte Carlo simulation, this study examined how the distribution of psychological and educational measurement data affects variance component estimation under different generalizability theory methods. The data distributions were normal, binomial, and multinomial; the estimation methods were the traditional method, the jackknife, the bootstrap, and MCMC. The results showed that (1) the traditional method estimated the variance components of normally and multinomially distributed data relatively well but required correction for binomially distributed data, the jackknife estimated the variance components of all three distributions accurately, and both the corrected bootstrap and the MCMC method with informative priors (MCMCinf) gave good estimates for all three distributions; and (2) the distribution of psychological and educational measurement data affects how well the four methods estimate generalizability theory variance components and constrains each method's performance, so the methods should be chosen with the data distribution in mind.
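As a concrete illustration of the resampling estimators compared above, the following sketch bootstraps the person variance component for a simple persons x items generalizability design. It is a minimal version, not the authors' simulation code: the one-facet ANOVA estimator, the simulated normal data, and the naive (uncorrected) bootstrap-over-persons scheme are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def person_variance_component(scores):
    """ANOVA estimator of the person variance component in a crossed
    persons x items design with one observation per cell."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)
    ms_person = n_i * ((person_means - grand) ** 2).sum() / (n_p - 1)
    resid = scores - person_means[:, None] - item_means[None, :] + grand
    ms_resid = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))
    return (ms_person - ms_resid) / n_i

# Simulated normal data: true person SD = 1.0, residual SD = 0.5
person_effects = rng.normal(0.0, 1.0, size=(200, 1))
scores = person_effects + rng.normal(0.0, 0.5, size=(200, 10))

# Naive (uncorrected) nonparametric bootstrap over persons
boot = [person_variance_component(scores[rng.integers(0, 200, 200)])
        for _ in range(1000)]
print("point estimate of person variance:", round(float(person_variance_component(scores)), 3))
print("bootstrap standard error         :", round(float(np.std(boot, ddof=1)), 3))
```

The corrected bootstrap and MCMC variants discussed in the abstract address the bias that this naive resampling scheme can introduce, especially for non-normal data.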

6.
Transportation agencies and researchers are optimistic about the potential use of data collected from connected vehicles (CVs) for a variety of traffic and transportation applications. However, the literature lacks an evaluation of the public's intention to share data for CV applications and its relationship with CV acceptance. This study investigated this gap by conducting a questionnaire survey of 2400 US adults. The results showed that the intention to share CV data depends upon the use of the data but not the type of data. The possible uses of CV data were found to be grouped under four categories: driver information, congestion assessment and reduction, and pavement and infrastructure assessment and improvement (ICP); enforcement of traffic rules and fees based on usage (EF); roadside assistance and crash investigation (RC); and research purposes (RP). The data sharing intention for these four data uses varies, though with some commonality, which reflects the overall data sharing intention in CV technology (CVT). In addition, it was found that data privacy and security issues of CVT lower both the data sharing intention and CV acceptance. Thus, a number of ways to improve CV acceptance by minimizing the data issues of CVT are discussed. Significant differences in perception of data privacy and security, data sharing intention, and CV acceptance were observed for individuals of different socio-economic and driving-related characteristics.

7.
Missing values are a very common phenomenon in social science research. Full information maximum likelihood estimation and multiple imputation are currently the most effective methods for handling missing data. A planned missing data design deliberately produces missing values through a special experimental design and then uses modern missing data methods to complete the statistical analysis, yielding unbiased statistical results. Planned missing data designs can be used in cross-sectional surveys to shorten (or lengthen) questionnaires and in longitudinal studies to reduce the number of measurement occasions; they can also be used to improve measurement validity. Commonly used planned missing data designs include the three-form design and two-method measurement.
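To make the three-form design concrete, the sketch below gives every respondent a common item block X plus two of the blocks A, B, and C, so that each respondent skips exactly one block by design; the block sizes, item names, sample size, and missingness pattern are invented for illustration only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Item blocks: every form keeps the common block X and omits one of A, B, C
blocks = {
    "X": [f"x{i}" for i in range(1, 5)],
    "A": [f"a{i}" for i in range(1, 5)],
    "B": [f"b{i}" for i in range(1, 5)],
    "C": [f"c{i}" for i in range(1, 5)],
}
forms = {1: ["X", "A", "B"], 2: ["X", "A", "C"], 3: ["X", "B", "C"]}

n = 300
all_items = [item for cols in blocks.values() for item in cols]
complete = pd.DataFrame(rng.normal(size=(n, len(all_items))), columns=all_items)

# Randomly assign respondents to forms and blank out each form's omitted block
form_assignment = rng.integers(1, 4, size=n)
observed = complete.copy()
for form_id, kept in forms.items():
    omitted = [c for block, cols in blocks.items() if block not in kept for c in cols]
    observed.loc[form_assignment == form_id, omitted] = np.nan

# Each non-common item is now planned-missing for roughly one third of respondents
print(observed.isna().mean().round(2))
```

Because the missingness is produced by random form assignment, it is missing completely at random by construction, which is what lets full information maximum likelihood or multiple imputation recover unbiased estimates afterwards.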

8.
Scott Jacques, Deviant Behavior, 2013, 34(12): 1543-1552
Offenders and nonoffenders possess valuable information about crime. But which possesses the best data? This is a complex issue, so I narrow my focus to data on empirical aspects of criminal events. Drawing on the necessary conditions perspective, I theorize that a source's possession 1) of data varies directly with its involvement in cases; 2) of representative data varies inversely with nonrandom involvement in cases and nonrandom siphoning off from the larger group to which it belongs; and 3) of accurate data varies inversely with time since involvement in cases. Those general principles suggest that offenders, especially active ones, possess the most data, representative data, and accurate data on empirical aspects of criminal events. I conclude by discussing the implications of those general principles for observation research, sources' possession of subjective data, and their possession of empirical data on other criminological events, specifically victimization and policing.

9.
Missing data: A conceptual review for applied psychologists
There has been conspicuously little research concerning missing data problems in the applied psychology literature. Fortunately, other fields have begun to investigate this issue. These include survey research, marketing, statistics, economics, and biometrics. A review of this literature suggests several trends for applied psychologists. For example, listwise deletion of data is often the least accurate technique to deal with missing data. Other methods for estimating missing data scores may be more accurate and preserve more data for investigators to analyze. Further, the literature reveals that the amount of missing data and the reasons for deletion of data impact how investigators should handle the problem. Finally, there is a great need for more investigation of strategies for dealing with missing data, especially when data are missing in nonrandom or systematic patterns.
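A small simulation makes the listwise-deletion point concrete: with even a modest per-variable missingness rate, only about half of the cases survive complete-case analysis. The six variables and the 10% missing-completely-at-random rate below are invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

n = 500
df = pd.DataFrame(rng.normal(size=(n, 6)), columns=list("abcdef"))

# Impose 10% missingness completely at random on every variable independently
df = df.mask(rng.random(df.shape) < 0.10)

complete_cases = df.dropna()   # listwise deletion keeps only fully observed rows
retained = len(complete_cases) / n
print(f"rows retained after listwise deletion: {len(complete_cases)} of {n} ({retained:.0%})")
# With 6 variables at 10% missingness each, roughly 0.9 ** 6 (about 53%) of rows survive
```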

10.
Michael Fuller, Zygon, 2015, 50(3): 569-582
The advent of extremely large data sets, known as “big data,” has been heralded as the instantiation of a new science, requiring a new kind of practitioner: the “data scientist.” This article explores the concept of big data, drawing attention to a number of new issues—not least ethical concerns, and questions surrounding interpretation—which big data sets present. It is observed that the skills required for data scientists are in some respects closer to those traditionally associated with the arts and humanities than to those associated with the natural sciences; and it is urged that big data presents new opportunities for dialogue, especially concerning hermeneutical issues, for theologians and data scientists.

11.
Best practices for missing data management in counseling psychology
This article urges counseling psychology researchers to recognize and report how missing data are handled, because consumers of research cannot accurately interpret findings without knowing the amount and pattern of missing data or the strategies that were used to handle those data. Patterns of missing data are reviewed, and some of the common strategies for dealing with them are described. The authors provide an illustration in which data were simulated and evaluate 3 methods of handling missing data: mean substitution, multiple imputation, and full information maximum likelihood. Results suggest that mean substitution is a poor method for handling missing data, whereas both multiple imputation and full information maximum likelihood are recommended alternatives to this approach. The authors suggest that researchers fully consider and report the amount and pattern of missing data and the strategy for handling those data in counseling psychology research and that editors advise researchers of this expectation.
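The following sketch contrasts two of the strategies discussed above, mean substitution and a simple multiple-imputation loop, on simulated data with a missing-at-random mechanism. It is an illustration of the general approach rather than the authors' simulation: it uses scikit-learn's IterativeImputer as a convenient stand-in for an imputation model, pools the imputed estimates by naive averaging rather than full Rubin's rules, and omits full information maximum likelihood because that requires a dedicated model-fitting package.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)

# Two correlated variables; y goes missing more often when x is high (MAR mechanism)
n = 1000
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)
data = np.column_stack([x, y])
missing = rng.random(n) < 1.0 / (1.0 + np.exp(-x))   # P(missing y) increases with x
data_mis = data.copy()
data_mis[missing, 1] = np.nan

# Mean substitution: fill missing y with the observed mean of y
mean_filled = data_mis.copy()
mean_filled[missing, 1] = np.nanmean(data_mis[:, 1])

# Simple multiple imputation: several stochastic imputations, naively averaged here
corrs = []
for m in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imputer.fit_transform(data_mis)
    corrs.append(np.corrcoef(completed.T)[0, 1])

print("complete-data correlation :", round(float(np.corrcoef(data.T)[0, 1]), 3))
print("mean substitution         :", round(float(np.corrcoef(mean_filled.T)[0, 1]), 3))
print("multiple imputation (avg) :", round(float(np.mean(corrs)), 3))
```

In this setup mean substitution tends to attenuate the x-y correlation, while the averaged multiple-imputation estimate stays much closer to the complete-data value, mirroring the pattern reported in the abstract.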

12.
This paper first defines the context of automated data analysis systems and then mentions approaches to analysis of data by systems of this type. It then briefly describes the BEHAVE data structure and proceeds to a discussion of the BEHAVE data analysis package, which provides the user with summary information about the organization of the data.

13.
Missing data, such as item responses in multilevel data, are ubiquitous in educational research settings. Researchers in the item response theory (IRT) context have shown that ignoring such missing data can create problems in the estimation of the IRT model parameters. Consequently, several imputation methods for dealing with missing item data have been proposed and shown to be effective when applied with traditional IRT models. Additionally, a nonimputation direct likelihood analysis has been shown to be an effective tool for handling missing observations in clustered data settings. This study investigates the performance of six simple imputation methods, which have been found to be useful in other IRT contexts, versus a direct likelihood analysis, in multilevel data from educational settings. Multilevel item response data were simulated on the basis of two empirical data sets, and some of the item scores were deleted, such that they were either missing completely at random or missing at random. An explanatory IRT model was used for modeling the complete, incomplete, and imputed data sets. We showed that direct likelihood analysis of the incomplete data sets produced unbiased parameter estimates that were comparable to those from a complete data analysis. Multiple-imputation approaches based on the two-way mean and corrected item mean substitution methods displayed varying degrees of effectiveness in imputing data that in turn could produce unbiased parameter estimates. The simple random imputation, adjusted random imputation, item means substitution, and regression imputation methods seemed to be less effective in imputing missing item scores in multilevel data settings.
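As an illustration of two-way mean substitution, one of the imputation methods named above, the sketch below fills each missing item score with person mean + item mean - grand mean computed from the observed responses; the small response matrix is made up, and this is the textbook two-way formula rather than the exact corrected variant evaluated in the study.

```python
import numpy as np

# Small person x item score matrix with missing responses coded as np.nan
scores = np.array([
    [1.0, 0.0, 1.0, np.nan],
    [0.0, np.nan, 1.0, 0.0],
    [1.0, 1.0, np.nan, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])

person_means = np.nanmean(scores, axis=1)
item_means = np.nanmean(scores, axis=0)
grand_mean = np.nanmean(scores)

# Two-way mean imputation: person effect + item effect - grand mean
imputed = scores.copy()
rows, cols = np.where(np.isnan(scores))
imputed[rows, cols] = person_means[rows] + item_means[cols] - grand_mean

print(np.round(imputed, 2))
```

Note that for binary item scores the imputed values need not fall in [0, 1]; the corrected variants and the stochastic versions studied in the paper address this kind of limitation.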

14.
Data in psychological research fall roughly into three types: cross-sectional data, time series data, and panel data, and the analysis methods and their prerequisites differ with the properties of each type. Statistical methods for cross-sectional data in psychology depend heavily on linear model structures and their assumptions, and they cannot fully exploit panel data in psychology. Functional data analysis is suited mainly to panel data and is especially appropriate for the statistical analysis of panel data with a time series structure, such as that arising in ERP, fMRI, and developmental psychology studies, providing a powerful new tool for psychological research.
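The core functional-data step, representing each participant's noisy time course as a smooth function that can then be evaluated or differentiated anywhere, can be sketched with a smoothing B-spline from SciPy; the simulated ERP-like curves, the number of participants, and the smoothing parameter below are all invented for illustration and are not from the article.

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(4)

# Simulated ERP-like panel data: 20 participants observed at 100 time points
t = np.linspace(0.0, 1.0, 100)
true_curve = np.sin(2 * np.pi * t) * np.exp(-2.0 * t)
panel = true_curve + rng.normal(scale=0.3, size=(20, t.size))

# Represent each participant's series as a smoothing B-spline, then evaluate the
# fitted curves and their first derivatives on the common time grid
smoothing = t.size * 0.3 ** 2          # rough smoothing level matched to the noise
fits = [splrep(t, y, s=smoothing) for y in panel]
curves = np.array([splev(t, tck) for tck in fits])
slopes = np.array([splev(t, tck, der=1) for tck in fits])

print("mean fitted value at t = 0.25     :", round(float(curves[:, 25].mean()), 3))
print("mean fitted derivative at t = 0.25:", round(float(slopes[:, 25].mean()), 3))
```

Once each series is a function, downstream analyses (functional PCA, functional regression, derivative-based comparisons) operate on the fitted curves rather than on the raw, noisy time points.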

15.
Results are described for a survey assessing the prevalence of missing data and the reporting practices of studies with missing data, based on a random sample of empirical research journal articles from the PsycINFO database for the year 1999, two years prior to the publication of a special section on missing data in Psychological Methods. The analysis indicates that missing data problems were found in about one-third of the studies. Further, analytical methods and reporting practices varied widely for studies with missing data. One may consider these results as baseline data to assess progress as reporting standards evolve for studies with missing data. Some potential reporting standards are discussed.

16.
Gait data are typically collected in multivariate form, so some multivariate analysis is often used to understand interrelationships between observed data. Principal Component Analysis (PCA), a data reduction technique for correlated multivariate data, has been widely applied by gait analysts to investigate patterns of association in gait waveform data (e.g., interrelationships between joint angle waveforms from different subjects and/or joints). Despite its widespread use in gait analysis, PCA is for two-mode data, whereas gait data are often collected in higher-mode form. In this paper, we present the benefits of analyzing gait data via Parallel Factor Analysis (Parafac), which is a component analysis model designed for three- or higher-mode data. Using three-mode joint angle waveform data (subjects×time×joints), we demonstrate Parafac's ability to (a) determine interpretable components revealing the primary interrelationships between lower-limb joints in healthy gait and (b) identify interpretable components revealing the fundamental differences between normal and perturbed subjects' gait patterns across multiple joints. Our results offer evidence of the complex interconnections that exist between lower-limb joints and limb segments in both normal and abnormal gaits, confirming the need for the simultaneous analysis of multi-joint gait waveform data (especially when studying perturbed gait patterns).
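For readers who have not met Parafac, the sketch below implements a bare-bones alternating least squares (ALS) decomposition of a three-mode subjects x time x joints array into rank-R components and checks the reconstruction on synthetic data; the unconstrained ALS updates, the fixed iteration count, and the synthetic waveforms are simplifications for illustration, not the gait analysis reported above.

```python
import numpy as np

rng = np.random.default_rng(5)

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product of U (m, R) and V (n, R), giving (m*n, R)."""
    m, R = U.shape
    n = V.shape[0]
    return (U[:, None, :] * V[None, :, :]).reshape(m * n, R)

def parafac_als(X, rank, n_iter=200):
    """Bare-bones Parafac/CP decomposition of a three-mode array via ALS."""
    I, J, K = X.shape
    A = rng.normal(size=(I, rank))
    B = rng.normal(size=(J, rank))
    C = rng.normal(size=(K, rank))
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding (subjects)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)   # mode-2 unfolding (time)
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)   # mode-3 unfolding (joints)
    for _ in range(n_iter):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Synthetic subjects x time x joints array with a known rank-2 structure plus noise
I, J, K, R = 30, 50, 3, 2
A0, B0, C0 = (rng.normal(size=(n, R)) for n in (I, J, K))
X = np.einsum("ir,jr,kr->ijk", A0, B0, C0) + 0.05 * rng.normal(size=(I, J, K))

A, B, C = parafac_als(X, rank=R)
X_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print("relative reconstruction error:", round(float(err), 4))
```

Unlike PCA applied to a flattened matrix, each extracted component here carries a subject loading, a time-course loading, and a joint loading, which is what makes the components interpretable across all three modes simultaneously.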

17.
Software designed for the microprocessor-based psychopathology laboratory provides a powerful data collection and maintenance package. A library of prepackaged, well-documented program modules has been developed to aid in developing and maintaining programs that administer experiments and that store, retrieve, and interpret data. A simple but powerful database system allows new experiments to be incorporated easily and obsolete experiments to be deleted. The data collection programs are independent of the general database, permitting them to be moved to dedicated remote systems as necessary. The data are stored in raw form to permit the researcher to try novel approaches in interpreting existing data.

18.
19.
20.