Related Articles
A total of 20 related articles were retrieved.
1.
2.
The Observer is a general-purpose software package for event recording and data analysis in behavioral research. It allows any IBM-type personal computer to serve as an event recorder. In addition, The Observer can generate dedicated event-recording programs for several types of non-IBM-compatible portable and hand-held computers and transfer files between the PC and such computers. The user specifies options through menus. The configuration can be either used directly for event recording on the PC or passed on to a program generator that creates a program to collect data on a hand-held computer. Observational data from either type of computer can be analyzed by the program. Event-recording configurations can be tailored to many different experimental designs. Keys can be designated as events, and modifiers can be used to indicate the limits of an event. The program allows grouping of events in classes and distinction between mutually exclusive versus nonexclusive events and duration events versus frequency events. Timing of events is accurate to 0.1 sec. An on-line electronic notepad permits notes to be made during an observation session. The program also includes on-line error correction. User comments as well as independent variables can be stored together with the observational data. During data analysis, the user can select the level of analysis and the type of output file. The Observer calculates frequency of occurrence and duration for classes of events, individual events, or combinations of events. For analysis of concurrence, one can select the number of nesting levels and the order of nesting. Output can be generated in the form of sorted event sequence files, text report files, and tabular ASCII files. The results can be exported to spreadsheet and statistical programs.
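As a rough illustration of the frequency and duration summaries described above, the following sketch tallies frequency of occurrence and total duration per behavior code from a list of timestamped start/stop records. The record layout and behavior codes are hypothetical and do not reflect The Observer's actual file formats.

```python
# Minimal sketch of frequency/duration tallying for duration events.
# The (code, start, stop) record layout is hypothetical, not The Observer's
# actual file format; times are in seconds with 0.1-sec resolution.
from collections import defaultdict

records = [
    ("groom", 0.0, 12.3),
    ("walk", 12.3, 15.7),
    ("groom", 15.7, 30.1),
]

frequency = defaultdict(int)
duration = defaultdict(float)

for code, start, stop in records:
    frequency[code] += 1
    duration[code] += stop - start

for code in frequency:
    print(f"{code}: n={frequency[code]}, total duration={duration[code]:.1f} s")
```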

3.
Two general-purpose software packages for collecting and analyzing observational data from a variety of settings are discussed. One package is designed for coding mutually exclusive behavioral states, using the Apple II's keyboard as an input device. The other is designed to monitor temporally overlapping behaviors, and it makes use of the Apple II's built-in game-control button inputs to indicate up to three behavioral states that may occur simultaneously.

4.
5.
Three BASIC programs for processing observational, nonconcurrent sequential data are presented. The programs follow Sackett's lag sequential analysis method and have the innovations of running on interactive microcomputers and of providing plots of results. The outcome of the analysis is stored on a magnetic disk, facilitating a further application of probabilistic models to the transformed data. The Allison-Liker correction for the comparison test between expected and observed lag probabilities is included in the programs.
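For readers unfamiliar with the computations involved, the sketch below works through a lag-1 sequential analysis and a corrected z score of the form commonly attributed to Allison and Liker (1982). The coded event sequence is hypothetical, and the original BASIC programs are not reproduced here.

```python
# Sketch of a lag-1 sequential analysis with an Allison-Liker-style corrected
# z score. The coded event sequence and behavior codes are hypothetical.
from math import sqrt

seq = list("ABACABABCA")   # coded event sequence (hypothetical)
given, target, lag = "A", "B", 1

n = len(seq)
p_g = seq.count(given) / n      # simple probability of the given behavior
p_t = seq.count(target) / n     # simple probability of the target behavior

pairs = [(seq[i], seq[i + lag]) for i in range(n - lag)]
n_given = sum(1 for g, _ in pairs if g == given)
n_trans = sum(1 for g, t in pairs if g == given and t == target)
p_t_given_g = n_trans / n_given  # observed lag-1 conditional probability

# Corrected z in the form commonly attributed to Allison and Liker (1982)
z = (p_t_given_g - p_t) / sqrt(p_t * (1 - p_t) * (1 - p_g) / (n * p_g))
print(f"p(T|G, lag {lag}) = {p_t_given_g:.3f}, expected = {p_t:.3f}, z = {z:.2f}")
```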

6.
In this article, we describe the Interval Manager (INTMAN) software system for collecting time-sampled observational data and present a preliminary application comparing the program with a traditional paper-and-pencil method. INTMAN is a computer-assisted alternative to traditional paper-and-pencil methods for collecting fixed-interval time-sampled observational data. The INTMAN data collection software runs on Pocket PC handheld computers and includes a desktop application for Microsoft Windows that is used for data analysis. Standard analysis options include modified frequencies, percent of intervals, conditional probabilities, and kappa agreement matrices and values. INTMAN and a standardized paper-and-pencil method were compared under identical conditions on five dimensions: setup time, duration of data entry, duration of interobserver agreement calculations, accuracy, and cost. Overall, the computer-assisted program was a more efficient and accurate data collection system for time-sampled data than the traditional method.
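Two of the standard analysis options mentioned (percent of intervals and kappa agreement) can be illustrated with a short sketch. The interval records below are hypothetical, and the code is not INTMAN's own analysis routine.

```python
# Sketch: percent of intervals and interval-by-interval Cohen's kappa for a
# fixed-interval time-sampling record scored by two observers (hypothetical data).
obs1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # 1 = behavior scored in that interval
obs2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

n = len(obs1)
percent_intervals = 100 * sum(obs1) / n

p_observed = sum(a == b for a, b in zip(obs1, obs2)) / n
p1, p2 = sum(obs1) / n, sum(obs2) / n
p_chance = p1 * p2 + (1 - p1) * (1 - p2)   # chance agreement on "yes" and "no"
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"Observer 1 scored the behavior in {percent_intervals:.0f}% of intervals")
print(f"Interval-by-interval kappa = {kappa:.2f}")
```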

7.
In this paper, two methods of sequential analysis are applied to hypothetical observational data. The first method employs the conventional “conditional probability” approach, illustrated using the GSEQ program (Bakeman & Quera, 1995). In order to overcome some of the difficulties associated with the conditional probability approach, the second method employs a new “normalized and pooled” approach. Essentially, by normalizing periods of time preceding, during, and following each occurrence of a nominated “given” behavior, the proportion of time units devoted to a “target” behavior can be estimated and then pooled across all occurrences of the given behavior. A summary diagram representing the likelihood that the target behavior precedes, occurs concurrently with, and follows the given behavior can then be constructed. Elements of this summary diagram can also be quantified. Given the graphical nature of the output, and its ease of use, the normalized and pooled approach may help to promote the use of sequential analysis in applied settings.
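A rough, interpretive sketch of the normalized-and-pooled idea follows, under the assumption that the periods before, during, and after each occurrence of the given behavior are rescaled to a fixed number of bins before the target-behavior proportions are pooled. This is an illustration only, not the authors' implementation.

```python
# Interpretive sketch of a "normalized and pooled" analysis: the target-behavior
# record for the period before, during, and after each occurrence of the given
# behavior is rescaled to a fixed number of bins, then pooled (averaged) across
# occurrences. All data are hypothetical.
import numpy as np

def normalize(segment, n_bins=10):
    """Resample a 0/1 target time series to a fixed number of bins."""
    x_old = np.linspace(0.0, 1.0, len(segment))
    x_new = np.linspace(0.0, 1.0, n_bins)
    return np.interp(x_new, x_old, segment)

# Target-behavior presence (1 = present per time unit) around three occurrences
# of the given behavior.
befores = [np.array([0, 0, 1, 1]), np.array([0, 1, 1]), np.array([0, 0, 0, 1, 1])]
durings = [np.array([1, 1, 0]), np.array([1, 1, 1, 0]), np.array([1, 0])]
afters  = [np.array([0, 1, 0, 0]), np.array([0, 0]), np.array([1, 0, 0])]

pooled = {phase: np.mean([normalize(s) for s in segments], axis=0)
          for phase, segments in (("before", befores),
                                  ("during", durings),
                                  ("after", afters))}

for phase, profile in pooled.items():
    print(phase, np.round(profile, 2))
```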

8.
As illustrated by the mistaken, high-profile fingerprint identification of Brandon Mayfield in the Madrid Bomber case, and consistent with a recent critique by the National Academy of Sciences (2009), it is clear that the forensic sciences are subject to contextual bias and fraught with error. In this article, we describe classic psychological research on primacy, expectancy effects, and observer effects, all of which indicate that context can taint people's perceptions, judgments, and behaviors. Then we describe recent studies indicating that confessions and other types of information can set into motion forensic confirmation biases that corrupt lay witness perceptions and memories as well as the judgments of experts in various domains of forensic science. Finally, we propose best practices that would reduce bias in the forensic laboratory as well as its influence in the courts.

9.
10.
This study was conducted to provide standardization data and information on the reliability and factorial validity of the recently developed Adolescent Activities Checklist (AAC). A total of 563 adolescents in grades 7 through 12 served as subjects. Significant main effects for gender, race, and grade were obtained in a multivariate analysis of variance. On the basis of this information, standardization data were established for these three variables. Further investigation indicated that the internal consistency of the AAC was high. In addition, results of a principal components analysis conducted on the frequencies of the Unpleasant and Pleasant Activities subscales revealed four and three factors, respectively. For unpleasant activities, the major dimensions were found to occur in three situations, namely social interactions, family situations, and school situations; stressful events formed the fourth unpleasant-activities dimension. For pleasant activities, three dimensions appeared: heterosocial behavior, reinforcing interpersonal situations, and social reinforcement.

11.
We summarize five studies of our large-scale research program, in which we examined aspects of contour-based object identification and segmentation, and we report on the stimuli we used, the norms and data we collected, and the software tools we developed. The stimuli were outlines derived from the standard set of line drawings of everyday objects by Snodgrass and Vanderwart (1980). We used contour curvature as a major variable in all the studies. A total of 1,500 participants produced very solid normative identification rates for silhouettes and contours, straight-line versions, and fragmented versions, and quite reliable benchmark data about the saliency of points and object segmentation into parts. We also developed several software tools to generate stimuli and to analyze the data in nonstandard ways. Our stimuli, norms and data, and software tools have great potential for further exploration of factors influencing contour-based object identification, and are also useful for researchers in many different disciplines (including computer vision) on a wide variety of research topics (e.g., priming, agnosia, perceptual organization, and picture naming). The full set of norms, data, and stimuli may be downloaded from www.psychonomic.org/archive/.

12.
13.
14.

15.
16.
A software program written to collect real-time observational data is described. The flexible program allows customized behavior codes and observational durations and simultaneously records both timed and counted events. The data are collected by means of single keystrokes, automatically stored to disk with 0.01-sec resolution, and summarized for each observational session. The program's database files are dBASE III PLUS compatible and may be browsed, edited, or converted to ASCII files from the program's main menu. Field testing demonstrated the efficiency and interobserver reliability of the program (for frequency, r = .81; for durational behaviors, rs = .89 and .96). The software operates on IBM XT/AT/PS/2s and most clones with PC/MS-DOS version 2.0 or greater.
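The interobserver reliability values quoted above are correlations between observers' records. As a small illustration (not the program's own routine), a Pearson r over two observers' per-session frequency counts could be computed as follows, using hypothetical data.

```python
# Sketch: Pearson correlation between two observers' per-session frequency
# counts as an index of interobserver reliability. Data are hypothetical.
from math import sqrt

observer_a = [12, 7, 15, 9, 11, 6]   # frequency counts per session
observer_b = [11, 8, 14, 10, 12, 5]

n = len(observer_a)
mean_a, mean_b = sum(observer_a) / n, sum(observer_b) / n
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(observer_a, observer_b))
var_a = sum((a - mean_a) ** 2 for a in observer_a)
var_b = sum((b - mean_b) ** 2 for b in observer_b)

r = cov / sqrt(var_a * var_b)
print(f"Interobserver reliability r = {r:.2f}")
```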

17.
18.
Reliability of observational data was measured simultaneously by two assessors under two experimental conditions. During overt assessment, observers were told that reliability would be measured by one of the two assessors, thus permitting computation of reliability with an identified and an unidentified assessor. During covert assessment, observers were not informed that reliability was being measured. Throughout the study, each of the assessors employed a unique version of a standard observational code. In the overt assessment condition, reliability of observers with the identified assessor was consistently higher than reliability with the unidentified assessor, indicating that observers modified their observational criteria to approximate those of the identified assessor. In the covert assessment condition, reliability with the two assessors was substantially lower than during overt assessment. Further, observers consistently recorded lower frequencies of disruptive behavior than the two assessors during covert assessment.

19.
Many observational systems in basic and applied research produce a record of sequences of events over time. Within such observational systems, important information may be found in the frequency of the transitions between events that does not emerge in the typical researcher's focus on absolute event frequency. Indeed, if many behaviors are controlled by closely adjacent preceding events, then substantial prediction and control can be obtained through knowledge and manipulation of causal event transitions. A method for analyzing and testing sequential dependencies between events is proposed as part of an integrated package of computer-based data entry, storage, and analysis procedures. The mathematical portion of these techniques is based on the kappa statistic, which is applicable to determining whether particular transitions among events differ from chance and whether particular transitions differ significantly across groups of subjects. Low-cost hardware and software to implement the proposed procedures are described. This work was supported by Grant 1 RO1 HD19245-01A1 from the National Institute of Child Health and Human Development to the two senior authors and by grants of equipment from Commodore Computers, Inc., Koala Corporation, and NEC Telephones, Inc.
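As a loose illustration of how a kappa-style index can compare an observed transition probability with its chance expectation, consider the sketch below. The event sequence is hypothetical, and the statistic shown is not necessarily the exact form proposed by the authors.

```python
# Sketch: lag-1 transition counts and a kappa-style index comparing an observed
# transition probability with its chance expectation. The event sequence is
# hypothetical and the statistic is illustrative only.
from collections import Counter

seq = list("ABBACABCAB")            # coded event sequence (hypothetical)
transitions = Counter(zip(seq, seq[1:]))

given, target = "A", "B"
n_given = sum(count for (g, _), count in transitions.items() if g == given)
p_obs = transitions[(given, target)] / n_given    # P(target | given) at lag 1
p_chance = seq.count(target) / len(seq)           # unconditional P(target)

kappa = (p_obs - p_chance) / (1 - p_chance)
print(f"P({target}|{given}) = {p_obs:.2f}, chance = {p_chance:.2f}, kappa = {kappa:.2f}")
```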

20.