Similar Articles
20 similar articles found (search time: 15 ms)
1.
2.
The computer program ALICE addresses the two major problems of data manipulation and analysis. First, ALICE allows the user to treat data from an experiment in the form in which they are generated. Second, mathematical calculations and statistical analyses are included as an intrinsic part of the multidimensional approach to data handling. ALICE accepts raw data in the form and order in which they were collected; reorganizes, partitions, or selects any subset of them (including a single entry); and arithmetically combines, transforms, or evaluates any formula involving them. Furthermore, learning to use ALICE is simple, even for those who are new to both computers and data analysis.
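The ALICE operations described above (select a subset of raw data in collection order, then evaluate a formula over the selected entries) map onto ordinary list operations. The record layout and the derived "speed" formula below are hypothetical illustrations, not ALICE's actual command syntax:

```python
# Hypothetical trial records in the order they were collected
trials = [
    {"subject": 1, "condition": "A", "rt": 412},
    {"subject": 1, "condition": "B", "rt": 385},
    {"subject": 2, "condition": "A", "rt": 498},
]

# partition/select: keep only condition-A trials, preserving collection order
cond_a = [t for t in trials if t["condition"] == "A"]

# transform: evaluate a formula (responses per second) over the selection
speed = [1000 / t["rt"] for t in cond_a]

print(len(cond_a), [round(s, 2) for s in speed])  # → 2 [2.43, 2.01]
```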

3.
C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.
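The core computation, matching a user-specified sequential pattern against a stream of categorical codes and reporting its rate of occurrence, can be sketched as follows. The pattern syntax here (a plain list of codes matched contiguously) is a simplifying assumption; C-quence's actual query language is richer:

```python
def pattern_rate(codes, pattern):
    """Return (count, rate) of contiguous occurrences of `pattern` in `codes`.

    The rate is occurrences per sliding window of the pattern's length.
    """
    k = len(pattern)
    count = sum(1 for i in range(len(codes) - k + 1)
                if codes[i:i + k] == pattern)
    windows = max(len(codes) - k + 1, 1)
    return count, count / windows

# hypothetical interaction codes
codes = ["gaze", "smile", "speak", "gaze", "smile", "nod"]
print(pattern_rate(codes, ["gaze", "smile"]))  # → (2, 0.4)
```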

4.
5.
This paper presents a PC-based eye-position data collection and analysis system. Software routines are described that supplement hardware calibration procedures, improving data-collection accuracy and reducing the number of unusable trials. Collected eye-position data can be remapped over a displayed stimulus image and spatially and temporally represented by parameters such as individual fixations, clusters of fixations (Nodine, Carmody, & Kundel, 1978), cumulative clusters, and gaze durations. An important feature of the system is that the software routines preserve scan-path information that provides a sequential dimension to the analysis of eye-position data. A “hotspot” analysis is also described, which cumulates, across one or more observers, the frequency of eye-position landings or “hits” on designated areas of interest for a given stimulus. Experimental applications in the fields of radiology, psychology, and art are provided, illustrating how eye-position data can be interpreted both in signal detection and in information-processing frameworks using the present methods of analysis.
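The "hotspot" idea described above, cumulating eye-position landings inside designated areas of interest, reduces to a point-in-rectangle count. The AOI names and the rectangle format `(x0, y0, x1, y1)` are assumptions for illustration:

```python
def hotspot_counts(fixations, aois):
    """Count fixation 'hits' per area of interest.

    fixations: list of (x, y) eye-position landings (possibly pooled
    across observers); aois: dict mapping AOI name -> (x0, y0, x1, y1).
    """
    counts = {name: 0 for name in aois}
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts

# hypothetical AOIs on a radiograph and pooled fixation landings
aois = {"lesion": (0, 0, 50, 50), "background": (60, 0, 200, 200)}
fixes = [(10, 10), (30, 40), (120, 80)]
print(hotspot_counts(fixes, aois))  # → {'lesion': 2, 'background': 1}
```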

6.
Some computational and statistical techniques that can be used in the analysis of event-related potential (ERP) data are demonstrated. The techniques are fairly elementary but go one step further than simple area measurement or peak picking, which are most often used in ERP analysis. Both amplitude and latency measurement techniques are considered. Principal components analysis (PCA) and methods for electromyographic onset determination are presented in detail, and Woody filtering is discussed briefly. The techniques are introduced in a nontechnical, tutorial review style. The techniques are applied to a single existing data set, and practical guidelines for their use are given. The methods are demonstrated against a background of theoretical notions that are related to the definition of ERP components.
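The PCA step mentioned above can be sketched in a few lines: trials are rows, time points are columns, and components are eigenvectors of the covariance matrix over time points. The data here are synthetic (a single "P300-like" bump with trial-to-trial amplitude variation), not the paper's data set:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_time = 40, 100
t = np.linspace(0, 1, n_time)
shape = np.exp(-((t - 0.3) ** 2) / 0.005)      # a P300-like bump (synthetic)
amps = rng.normal(1.0, 0.3, (n_trials, 1))     # trial-to-trial amplitude
erps = amps * shape + rng.normal(0, 0.05, (n_trials, n_time))

X = erps - erps.mean(axis=0)                   # center each time point
cov = X.T @ X / (n_trials - 1)
eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                           # loadings of the first PC
explained = eigvals[-1] / eigvals.sum()
print(f"first PC explains {explained:.0%} of the variance")
```

Because amplitude variation along one waveform shape dominates the synthetic data, the first component should account for most of the variance.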

7.
When analyzing genetic data, structural equation modeling (SEM) provides a straightforward methodology for decomposing phenotypic variance using a model-based approach. Furthermore, several models can be easily implemented, tested, and compared using SEM, allowing the researcher to obtain valuable information about the sources of variability. This methodology is briefly described and applied to re-analyze a Spanish IQ data set using the biometric ACE model. In summary, we report heritability estimates that are consistent with those of previous studies and support a substantial genetic contribution to phenotypic IQ; around 40% of the variance can be attributed to it. With regard to the environmental contribution, shared environment accounts for 50% of the variance, and non-shared environment accounts for the remaining 10%. These results are discussed in the text.
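The ACE decomposition can be illustrated with Falconer's classic approximation rather than the full SEM fit used in the paper: from monozygotic and dizygotic twin correlations, a² = 2(rMZ − rDZ), c² = 2·rDZ − rMZ, and e² = 1 − rMZ. The twin correlations below are hypothetical values chosen only to reproduce the 40/50/10 split reported in the abstract:

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's approximation to the ACE variance components:
    a2 = 2(rMZ - rDZ), c2 = 2*rDZ - rMZ, e2 = 1 - rMZ."""
    a2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return a2, c2, e2

# hypothetical twin correlations chosen to yield the 40/50/10 split
a2, c2, e2 = falconer_ace(r_mz=0.9, r_dz=0.7)
print(round(a2, 2), round(c2, 2), round(e2, 2))  # → 0.4 0.5 0.1
```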

8.
9.
RSCORE-J, a computer program for a signal-detection analysis of pooled rating-method data, is listed and described. RSCORE-J computes jackknife estimates of ROC parameters and their standard errors from rating-method data pooled over a group of observers.
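The jackknife idea applied here can be sketched generically: recompute the statistic with each observer left out, then derive a bias-corrected estimate and standard error from the leave-one-out values. The statistic below (a simple mean of hypothetical per-observer values) stands in for the ROC parameters RSCORE-J actually estimates:

```python
import math

def jackknife(values, stat):
    """Leave-one-out (jackknife) estimate and standard error of stat(values)."""
    n = len(values)
    loo = [stat(values[:i] + values[i + 1:]) for i in range(n)]
    mean_loo = sum(loo) / n
    estimate = n * stat(values) - (n - 1) * mean_loo   # bias-corrected
    se = math.sqrt((n - 1) / n * sum((v - mean_loo) ** 2 for v in loo))
    return estimate, se

ratings = [1.2, 0.9, 1.5, 1.1, 0.8]   # hypothetical per-observer estimates
est, se = jackknife(ratings, lambda xs: sum(xs) / len(xs))
print(round(est, 3), round(se, 3))  # → 1.1 0.122
```

For the mean, the jackknife standard error coincides with the usual standard error of the mean, which makes the sketch easy to check by hand.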

10.
Computer-based studies usually produce log files as raw data. These data cannot be analyzed adequately with conventional statistical software. The Chemnitz LogAnalyzer provides tools for quick and convenient visualization and analysis of hypertext navigation behavior, both for individual users and for aggregated data. In addition, it supports analogous analyses of questionnaire data and reanalysis with respect to several predefined orders of nodes of the same hypertext. As an illustration of how to use the Chemnitz LogAnalyzer, we give an account of one study on learning with hypertext. Participants either searched for specific details or read a hypertext document to familiarize themselves with its content. The tool helped identify navigation strategies affected by these two processing goals and provided comparisons, for example, of processing times and visited sites. Altogether, the Chemnitz LogAnalyzer fills the gap between log files as the raw data of Web-based studies and conventional statistical software.
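A toy version of this kind of log analysis reads (timestamp, node) navigation events and derives per-node visit counts and dwell times. The log format is an assumption for illustration, not the Chemnitz LogAnalyzer's actual file format:

```python
def analyze_log(events):
    """Derive visit counts and dwell times from a navigation log.

    events: list of (seconds, node_id) pairs, ordered by time; the dwell
    time of each visit runs until the next event.
    """
    visits, dwell = {}, {}
    for (t, node), (t_next, _) in zip(events, events[1:]):
        visits[node] = visits.get(node, 0) + 1
        dwell[node] = dwell.get(node, 0) + (t_next - t)
    return visits, dwell

# hypothetical log: node visited at a given second
log = [(0, "intro"), (12, "methods"), (40, "intro"), (55, "end")]
visits, dwell = analyze_log(log)
print(visits)  # → {'intro': 2, 'methods': 1}
print(dwell)   # → {'intro': 27, 'methods': 28}
```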

11.
12.
13.
A system designed for use in complex operant conditioning experiments is described. Some of its key features are: (a) plugboards that permit the experimenter to change either from one program to another or from one analysis to another in less than a minute, (b) time-sharing of permanently wired electronic logic components, and (c) recordings suitable for automatic analyses. Included are flow diagrams of the system and sample logic diagrams for programming experiments and for analyzing data.

14.
This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule’s Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
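The first three statistics in that list, observed transitional frequencies, transitional probabilities, and expected frequencies under independence (row total × column total / N), can be computed from a code stream in a few lines. This is a sketch in Python rather than the paper's SAS/SPSS programs:

```python
from collections import Counter

def lag1_stats(codes):
    """Lag-1 transition counts, row-wise transitional probabilities, and
    expected counts under independence for a stream of categorical codes."""
    pairs = Counter(zip(codes, codes[1:]))
    n = sum(pairs.values())
    row, col = Counter(), Counter()
    for (a, b), f in pairs.items():
        row[a] += f
        col[b] += f
    prob = {(a, b): f / row[a] for (a, b), f in pairs.items()}
    expected = {(a, b): row[a] * col[b] / n for (a, b) in pairs}
    return pairs, prob, expected

pairs, prob, expected = lag1_stats(list("ABABBA"))
print(pairs[("A", "B")], prob[("A", "B")], expected[("A", "B")])  # → 2 1.0 1.2
```

Adjusted residuals and the other statistics in the abstract build on exactly these observed and expected counts.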

15.
16.
In many human movement studies, angle-time series data are measured on several groups of individuals. Current methods for comparing groups either compare the mean value in each group or use multivariate techniques such as principal components analysis and test the principal component scores. Such methods have been useful, though they discard a large amount of information. Functional data analysis (FDA) is an emerging statistical analysis technique in human movement research which treats the angle-time series data as a function rather than a series of discrete measurements. This approach retains all of the information in the data. Functional principal components analysis (FPCA) is an extension of multivariate principal components analysis which examines the variability of a sample of curves and has been used to examine differences in movement patterns of several groups of individuals. Currently the functional principal components (FPCs) for each group are either determined separately (yielding components that are group-specific) or by combining the data for all groups and determining the FPCs of the combined data (yielding components that summarize the entire data set). The group-specific FPCs contain both within- and between-group variation, and issues arise when comparing FPCs across groups because the order of the FPCs can differ in each group. The FPCs of the combined data may not adequately describe all groups of individuals, and comparisons between groups typically use t-tests of the mean FPC scores in each group. When these differences are statistically non-significant, it can be difficult to determine how a particular intervention is affecting movement patterns or how injured subjects differ from controls. In this paper we aim to perform FPCA in a manner allowing sensible comparisons between groups of curves. A statistical technique called common functional principal components analysis (CFPCA) is implemented. CFPCA identifies the common sources of variation evident across groups but allows the order of each component to change for a particular group. This allows for the direct comparison of components across groups. We use our method to analyze a biomechanical data set examining the mechanisms of chronic Achilles tendon injury and the functional effects of orthoses.
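The FPCA building block can be sketched with numpy: when each curve is sampled on a common time grid, FPCA reduces to PCA on the sampled vectors. The synthetic "angle-time" curves below differ between groups only in amplitude; the common-components-with-group-specific-ordering machinery of CFPCA itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
shape = 10 * np.sin(2 * np.pi * t)               # common curve shape (synthetic)
group_a = shape * rng.normal(1.0, 0.1, (20, 1))  # lower-amplitude group
group_b = shape * rng.normal(1.4, 0.1, (20, 1))  # higher-amplitude group
curves = np.vstack([group_a, group_b])           # 40 curves x 50 time samples

centered = curves - curves.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[0]                        # FPC1 score for each curve
gap = abs(scores[:20].mean() - scores[20:].mean())
print(f"FPC1 separates the group means by {gap:.1f} score units")
```

Because amplitude variation dominates, the first component recovers the sine shape and its scores separate the two groups cleanly.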

17.
The Type I error rates and powers of three recent tests for analyzing nonorthogonal factorial designs under departures from the assumptions of homogeneity and normality were evaluated using Monte Carlo simulation. Specifically, this work compared the performance of the modified Brown-Forsythe procedure, the generalization of Box's method proposed by Brunner, Dette, and Munk, and the mixed-model procedure adjusted by the Kenward-Roger solution available in the SAS statistical package. With regard to robustness, the three approaches adequately controlled Type I error when the data were generated from symmetric distributions; however, this study's results indicate that, when the data were extracted from asymmetric distributions, the modified Brown-Forsythe approach controlled the Type I error slightly better than the other procedures. With regard to sensitivity, the highest power rates were obtained when the analyses were done with the MIXED procedure of the SAS program. Furthermore, the results also indicated that, when the data were generated from symmetric distributions, little power was sacrificed by using the generalization of Box's method in place of the modified Brown-Forsythe procedure.
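The Monte Carlo logic used in such studies is: generate many data sets under the null hypothesis, run the test on each, and take the rejection proportion as the empirical Type I error rate. A two-sample Welch test with a normal reference distribution stands in here for the factorial-design procedures the paper actually compares:

```python
import math
import random
import statistics

def rejects_null(x, y):
    """Welch t statistic against a normal reference distribution (an
    approximation, adequate for illustration at n = 30 per group)."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    t = (statistics.mean(x) - statistics.mean(y)) / math.sqrt(vx / nx + vy / ny)
    return abs(t) > 1.96                 # two-sided 5% critical value

random.seed(42)
reps = 2000
rejections = sum(
    rejects_null([random.gauss(0, 1) for _ in range(30)],
                 [random.gauss(0, 1) for _ in range(30)])
    for _ in range(reps)
)
rate = rejections / reps
print(f"empirical Type I error at alpha = .05: {rate:.3f}")
```

Under the null with normal data, the empirical rate should land near the nominal .05; repeating the loop with skewed or heteroscedastic generators is what produces the robustness comparisons reported above.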

18.
19.
20.
This report concerns the use of the PET 2001–8 microcomputer (Commodore) for taking observational data on human interactions in a classroom setting. The program enables the observer to record several different types of behavior as they occur in time and provides for calculations of frequency, duration, and latency data for the behaviors observed, using the real-time clock built into the PET. During field testing in a classroom, an observer can accurately record sequential teacher-student interactions. Because of the integration of processor, TV monitor, keyboard, and cassette deck, the PET is very portable, which increases its flexibility for observing behaviors in naturalistic settings.
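The frequency, duration, and latency measures such a program derives from keyed observations can be sketched from a time-stamped event stream. The record format `(seconds, behavior, 'on'|'off')` is an assumption for illustration, not the PET program's actual data format:

```python
def summarize(records):
    """Frequency, total duration, and latency to first onset per behavior.

    records: list of (seconds, behavior, 'on'|'off') tuples in time order,
    with every 'on' eventually matched by an 'off'.
    """
    freq, dur, latency, onset = {}, {}, {}, {}
    for t, behavior, state in records:
        if state == "on":
            freq[behavior] = freq.get(behavior, 0) + 1
            latency.setdefault(behavior, t)     # time of first occurrence
            onset[behavior] = t
        else:
            dur[behavior] = dur.get(behavior, 0) + (t - onset.pop(behavior))
    return freq, dur, latency

# hypothetical keyed observations of one behavior
recs = [(2, "talk", "on"), (5, "talk", "off"),
        (8, "talk", "on"), (9, "talk", "off")]
freq, dur, latency = summarize(recs)
print(freq["talk"], dur["talk"], latency["talk"])  # → 2 4 2
```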

