Similar Literature
20 similar documents found (search time: 31 ms)
1.
Time-series analysis procedures for analyzing behavioral data are receiving increasing support. However, several authorities strongly recommend using at least 50–100 points per experimental phase. A complex mathematical model must then be empirically developed using computer programs to extract serial dependency from the data before the effects of treatment interventions can be evaluated. The present discussion provides a simple method of evaluating intervention effects that can be used with as few as 8 points per experimental phase. The calculations are easy enough to do by hand.

2.
Wendy M. Yen, Psychometrika, 1985, 50(4), 399–410
When the three-parameter logistic model is applied to tests covering a broad range of difficulty, there frequently is an increase in mean item discrimination and a decrease in variance of item difficulties and traits as the tests become more difficult. To examine the hypothesis that this unexpected scale shrinkage effect occurs because the items increase in complexity as they increase in difficulty, an approximate relationship is derived between the unidimensional model used in data analysis and a multidimensional model hypothesized to be generating the item responses. Scale shrinkage is successfully predicted for several sets of simulated data. The author is grateful to Robert Mislevy for kindly providing a copy of his computer program, RESOLVE.
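For reference, the three-parameter logistic (3PL) item response function discussed above can be sketched as follows. The D = 1.7 scaling constant is a common convention, and the parameter values in the usage example are illustrative, not taken from the abstract:

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model:
    theta = trait level, a = discrimination, b = difficulty,
    c = lower (guessing) asymptote.  D = 1.7 is the conventional
    scaling constant that approximates the normal ogive."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))
```

At theta = b the probability sits halfway between the guessing floor c and 1, e.g. `p_3pl(0.0, 1.0, 0.0, 0.2)` gives 0.6.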

3.
The specific EEG manifestations of epilepsy, seizures, and interictal spikes and sharp waves occur at unpredictable times and at variable frequencies. To obtain an adequate diagnosis, it is often necessary to record the EEG for several hours or several days. A computer system was developed to perform data reduction and quantification by continuously monitoring seizures and automatically recognizing interictal spikes and sharp waves. The past 2 min of EEG are kept on the computer disk at every instant. When an epileptic seizure occurs or when a spike is detected, a sample of EEG, including a section preceding the event itself, is written on the EEG polygraph and on magnetic tape. A continuous recording is thus replaced by samples of varying lengths, containing only the important aspects of the EEG, reducing considerably the original data. After the monitoring session, the spatial and temporal distributions of the interictal activity are presented in a quantified form on the terminal. The seizures are recorded on digital tape and are available for several types of processing. The patient is also monitored by a video system; EEG and video are synchronized by a time-of-day clock to allow electroclinical correlations.
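The rolling pre-event storage described above, keeping only the most recent stretch of EEG and dumping it when an event is detected, can be sketched as a fixed-length buffer. The class name, window length, and sample values below are illustrative, not the system's actual parameters:

```python
from collections import deque

class PreEventBuffer:
    """Keep only the most recent `window` samples; on a detected
    event, snapshot() returns the buffered pre-event context."""
    def __init__(self, window):
        self.buf = deque(maxlen=window)  # old samples fall off the front

    def push(self, sample):
        self.buf.append(sample)

    def snapshot(self):
        return list(self.buf)
```

With a window of 5, pushing samples 1 through 8 leaves only the last five in the snapshot.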

4.
This article aims to develop a new account of scientific explanation for computer simulations. To this end, two questions are answered: what is the explanatory relation for computer simulations? And what kind of epistemic gain should be expected? For several reasons tailored to the benefits and needs of computer simulations, these questions are better answered within the unificationist model of scientific explanation. Unlike previous efforts in the literature, I submit that the explanatory relation is between the simulation model and the results of the simulation. I also argue that our epistemic gain goes beyond the unificationist account, encompassing a practical dimension as well.

5.
Additive similarity trees (total citations: 20; self-citations: 0; citations by others: 20)
Similarity data can be represented by additive trees. In this model, objects are represented by the external nodes of a tree, and the dissimilarity between objects is the length of the path joining them. The additive tree is less restrictive than the ultrametric tree, commonly known as the hierarchical clustering scheme. The two representations are characterized and compared. A computer program, ADDTREE, for the construction of additive trees is described and applied to several sets of data. A comparison of these results to the results of multidimensional scaling illustrates some empirical and theoretical advantages of tree representations over spatial representations of proximity data. We thank Nancy Henley and Vered Kraus for providing us with data, and Jan deLeeuw for calling our attention to relevant literature. The work of the first author was supported in part by the Psychology Unit of the Israel Defense Forces.
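In an additive tree, the dissimilarity between two objects is the total edge length along the unique path between their external nodes. A minimal sketch of that path-length computation (this is not the ADDTREE fitting algorithm, and the toy tree in the usage example is hypothetical):

```python
def tree_distance(adj, u, v):
    """Length of the unique path between nodes u and v in a tree,
    where adj maps each node to a dict {neighbor: edge_length}."""
    stack = [(u, None, 0.0)]          # (node, parent, distance so far)
    while stack:
        node, parent, dist = stack.pop()
        if node == v:
            return dist
        for nbr, w in adj[node].items():
            if nbr != parent:          # never walk back up the path
                stack.append((nbr, node, dist + w))
    raise ValueError("nodes not connected")
```

For example, with leaves A and B joined at internal node X, and X joined to Y carrying leaf C, the A–C dissimilarity is just the sum of the three intervening edge lengths.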

6.
The three-dimensional graphic method for quantifying body position is a series of observer procedures and computer programs designed to yield three-dimensional (height, width, and depth) coordinates for various body points. These coordinates can be graphed by computer in several different ways, and can be analyzed mathematically to provide information about a wide variety of variables, including interpersonal distance and body activity. The procedure for collecting and analyzing the data is explained and the computer programs developed for the method are described.
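Given the three-dimensional (height, width, depth) coordinates the method produces, a derived variable such as interpersonal distance reduces to the straight-line distance between two body points, e.g.:

```python
import math

def euclid3(p, q):
    """Straight-line distance between two (x, y, z) body points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
```

For instance, points at (0, 0, 0) and (3, 4, 0) are 5 units apart.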

7.
8.
Professor Iacobucci has provided a useful introduction to the computer program LISREL, as well as to several technical topics in structural equation modeling (SEM). However, SEM has not been synonymous with LISREL for several decades, and focusing on LISREL's 13 Greek matrices and vectors is not the most intuitive way to learn SEM. It is possible today to do model specification via a path diagram without any need for filling in matrix elements. The simplest alternative is based on the Bentler–Weeks model, whose basic concepts are reviewed. Selected additional SEM topics are discussed, including some recent developments and their practical implications. New simulation results on model fit under null and alternative hypotheses are also presented that are consistent with statistical theory but in part seem to contradict those reported by Iacobucci.

9.
This study aims to verify the internal validity and logic of a squash competition decision-making model through the use of computer simulation. The model defines the cognitive-decisional strategy of the defending player (D) when selecting a motor reaction in response to his opponent's shot. Computer simulation of the model was carried out on a PDP-10 computer using a recent version of UCI-LISP. Protocol analysis data pertaining to the nature of the information D processes when awaiting the attacking player's shot were fed into the simulation program in order to examine the extent to which the model can reproduce decisions reached in various defensive contexts. Simulation results reveal that the proposed model can account for a substantial part of the variation in the speed and accuracy of D's motor reaction in real sport situations. Several factors, such as time pressure, expectancies, uncertainty, and the recency and familiarity of the relationship between signal and response, appear to affect D's motor response via the cognitive-decisional strategy employed by the defending player. Particular discrepancies observed between simulation results and decisions reached by expert players in specific defensive situations nevertheless indicate that the decision rule utilized within the present model needs to be refined. In this regard, several issues are discussed and suggestions for further simulation studies are put forward in order to account more precisely for the various features characterizing the defensive player's motor reaction in a real sporting context.

10.
The choice of computer courses has a direct influence on the development of computer literacy. It is alarming, therefore, that girls seem to choose computer courses less frequently than boys. The present paper examines (a) whether these often-reported gender differences also occur at the early high school level (Study 1) and (b) how these differences can be predicted by applying an expectancy-value model to the domain of computing (Study 2). Both studies clearly show gender differences in the choice of computer courses in children between 10 and 16 years in the real-life situation of choosing courses at school. In Study 2, the suggested expectancy-value model is tested using data from 159 students and 137 parents. The model shows a good fit to the data, and the observed gender differences in the choice of computer courses could be predicted by differences in the value placed on computers and the expectations of success. However, these differences could only be partly explained by differences in perceived parental attitudes, and there were only weak relationships between parental attitudes and the corresponding perceptions of the students. Educational implications of the findings and suggestions for future research are discussed.

11.
This article presents the results of a quantitative study (n = 1,058) of the gender divide in ICT attitudes. In general, females had more negative attitudes towards computers and the Internet than males did. Results indicate a positive relationship between ICT experience and ICT attitudes. This experience is measured by the period of time using a computer and self-perceived computer and Internet experience. Further analyses of the impact of gender on this correlation of ICT experience and ICT attitudes were conducted by means of a multivariate model. General Linear Model (GLM) analysis revealed that there was a significant effect of gender, computer use, and self-perceived computer experience on computer anxiety attitudes, as well as several significant interaction effects. Males were found to have less computer anxiety than females; respondents who have used computers for a longer period of time and respondents with a higher self-perception of experience also show less computer anxiety. However, the GLM plot shows that the influence of computer experience works in different ways for males and females. Computer experience has a positive impact on decreasing computer anxiety for men, but a similar effect was not found for women. The model was also tested for computer-liking and Internet-liking factors.

12.
Extended similarity trees (total citations: 1; self-citations: 0; citations by others: 1)
Proximity data can be represented by an extended tree, which generalizes traditional trees by including marked segments that correspond to overlapping clusters. An extended tree is a graphical representation of the distinctive features model. A computer program (EXTREE) that constructs extended trees is described and applied to several sets of conceptual and perceptual proximity data. This research was supported in part by a National Science Foundation Pre-doctoral Fellowship to the first author. A magnetic tape containing both the EXTREE program described in the article and the ADDTREE/P program for fitting additive trees can also be obtained from the above address. Requests for the program should be accompanied by a check for $25 made out to Teachers College, to cover the costs of the tape and postage.

13.
Original, open-source computer software was developed and validated against established delay discounting methods in the literature. The software executed approximate Bayesian model selection methods on user-supplied temporal discounting data and computed the effective delay 50 (ED50) from the best-performing model. The software was custom-designed to enable behavior analysts to conveniently apply recent statistical methods to temporal discounting data with the aid of a graphical user interface (GUI). Independent validation of the approximate Bayesian model selection methods indicated that the program produced results identical to those of the original source paper and its methods. Monte Carlo simulation (n = 50,000) confirmed that the true model was selected most often in each setting. Simulation code and data for this study were posted to an online repository for use by other researchers. The model selection approach was applied to three existing delay discounting data sets from the literature in addition to the data from the source paper. Comparisons of model-selected ED50 values were consistent with traditional indices of discounting. Conceptual issues related to the development and use of computer software by behavior analysts and the opportunities afforded by free and open-source software are discussed, and a review of possible expansions of this software is provided.
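ED50, the effective delay 50, is the delay at which a delayed reward loses half its subjective value. Under Mazur's hyperbolic model, one common candidate model in this literature, it has a closed form. This sketch is illustrative only and is not the article's Bayesian model-selection code:

```python
def hyperbolic_value(delay, k):
    """Mazur's hyperbolic discounting: subjective value of a reward
    of nominal value 1 delivered after `delay` time units, with
    discounting rate k."""
    return 1.0 / (1.0 + k * delay)

def ed50_hyperbolic(k):
    """Delay at which value falls to one half: solving
    1 / (1 + k * D) = 0.5 gives D = 1 / k."""
    return 1.0 / k
```

For other candidate models (e.g. exponential or two-parameter forms), ED50 generally has to be found numerically rather than in closed form.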

14.
The value of the results of the inverse dynamic analysis procedures used in the study of human tasks depends on the quality of the kinematic and kinetic data supplied to the biomechanical model that supports them. The kinematic data, containing the position, velocity and acceleration of all anatomical segments of the biomechanical model, result from the reconstruction of human spatial motion by evaluating the positions of the anatomical points that uniquely define the position of all anatomical segments. Furthermore, the motion data must be kinematically consistent with the structure of the biomechanical model used in the analysis. The traditional photogrammetric methodologies used for the spatial reconstruction of human motion require images from two or more calibrated and synchronized cameras. This is because the projection of each anatomical point is described by two linear equations relating its three spatial coordinates to the two coordinates of the projected point; the image from a second camera is needed because a third equation is necessary to recover the original spatial position of the anatomical point. The methodology proposed here substitutes the kinematic constraint equations associated with a biomechanical model for the projection equations of the second camera in the motion reconstruction process. In this formulation, the system of equations arising from the point projections and the biomechanical model's kinematic constraints, representing the constant length of the anatomical segments, is solved simultaneously. Because the system of equations has multiple solutions for each image, a strategy based on the minimization of a cost function associated with the smoothness of the reconstructed motion is devised. It is shown how the process is implemented computationally, avoiding any operator intervention during the motion reconstruction for a given time period. This leads to an automated computer procedure that ensures the uniqueness of the reconstructed motion. The result of the reconstruction process is a set of data that is kinematically consistent with the biomechanical model used. Through applications of the proposed methodology to several sports exercises, its benefits and shortcomings are discussed.
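The core idea of replacing a second camera with a segment-length constraint can be illustrated in a stripped-down form: for a pinhole camera, a projected point fixes a ray, and requiring the 3-D point to lie at a known segment length from an already-located joint yields a quadratic in the unknown depth. The two roots are then disambiguated by a smoothness criterion. This is a hypothetical one-point sketch, not the article's full formulation:

```python
import math

def depth_candidates(u, joint, length):
    """Depths d along a unit camera ray u at which the 3-D point d*u
    lies exactly `length` away from a known joint position:
    |d*u - joint|^2 = length^2 is a quadratic d^2 + b*d + c = 0."""
    b = -2.0 * sum(ui * ji for ui, ji in zip(u, joint))
    c = sum(ji * ji for ji in joint) - length ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return []                       # constraint cannot be satisfied
    r = math.sqrt(disc)
    return [(-b - r) / 2.0, (-b + r) / 2.0]

def pick_smooth(candidates, prev_depth):
    """Resolve the two-root ambiguity by keeping the candidate closest
    to the previous frame's depth (a crude stand-in for the article's
    smoothness cost function)."""
    return min(candidates, key=lambda d: abs(d - prev_depth))
```

With the ray along the optical axis, u = (0, 0, 1), a joint at (0, 0, 5), and segment length 2, the two valid depths are 3 and 7; a previous-frame depth near 7 selects the second.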

15.
Hardware and software modifications are presented that allow a Commodore computer to collect and recognize spoken responses. Responses are timed with millisecond accuracy and automatically analyzed and scored. Accuracy data for this device from several experiments are presented. Potential applications and suggestions for improving recognition accuracy are also discussed.

16.
Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma–Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism.
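The unifying formalism mentioned above is a two-parameter entropy: for a distribution p with order r and degree t, Sharma–Mittal entropy is ((sum_i p_i^r)^((1-t)/(1-r)) - 1) / (1 - t), with Shannon (r, t -> 1), Rényi (t -> 1) and Tsallis (t = r) recovered as special cases or limits. A sketch, with the limiting cases handled explicitly:

```python
import math

def sharma_mittal(p, r, t, eps=1e-9):
    """Sharma-Mittal entropy of a probability vector p, with order r
    and degree t (natural-log units).  Shannon (r, t -> 1),
    Renyi (t -> 1) and Tsallis (t = r) arise as special cases."""
    support = [pi for pi in p if pi > 0]
    if abs(r - 1.0) < eps:
        shannon = -sum(pi * math.log(pi) for pi in support)
        if abs(t - 1.0) < eps:
            return shannon                                  # Shannon
        return (math.exp((1.0 - t) * shannon) - 1.0) / (1.0 - t)
    s = sum(pi ** r for pi in support)
    if abs(t - 1.0) < eps:
        return math.log(s) / (1.0 - r)                      # Renyi
    return (s ** ((1.0 - t) / (1.0 - r)) - 1.0) / (1.0 - t)
```

On a uniform distribution over four states, both the Shannon case (r = t = 1) and the Rényi case of order 2 (r = 2, t = 1) evaluate to log 4, while the Tsallis case r = t = 2 gives 0.75.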

17.
Some quantitative properties of the postreinforcement pause under fixed-interval schedules were simulated by a computer model embodying two processes, either of which could initiate responding in an interval. The first was a scalar timing system similar to that hypothesized to underlie behaviour on other tasks. The second was a process that initiated responding without regard to elapsed time in the interval. The model produced simulated pauses with a mean that varied as a power function of the interval value, and a standard deviation that appeared to grow as a linear function of the mean. Both these features were found in real data. The model also predicted several other features of pausing and responding under fixed-interval schedules and was also consistent with the results produced under some temporal differentiation contingencies. The model thus illustrated that behaviour that conformed to the power law could nevertheless be reconciled with scalar timing theory, if an additional non-timing process could also initiate responding.
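The two-process account can be caricatured as a race: on each interval, the pause ends at whichever comes first, a scalar timer (here a normal deviate with a constant coefficient of variation, the defining property of scalar timing) or a time-independent process (here exponential). All distributional choices and parameter values below are illustrative assumptions, not the article's fitted model:

```python
import random

def simulate_pauses(fi, n=2000, frac=0.6, cv=0.25, rate=0.01, seed=1):
    """Simulate n post-reinforcement pauses under a fixed interval of
    length fi as the minimum of (a) a scalar timer centered at
    frac * fi with constant coefficient of variation cv, and (b) an
    exponential non-timing process with the given rate.
    frac, cv and rate are illustrative assumptions."""
    rng = random.Random(seed)
    pauses = []
    for _ in range(n):
        timed = max(0.0, rng.gauss(frac * fi, cv * frac * fi))
        untimed = rng.expovariate(rate)
        pauses.append(min(timed, untimed))
    return pauses
```

Because the timer scales with the interval while the non-timing process does not, mean simulated pauses grow with the fixed-interval value but are capped by occasional early non-timed responses, qualitatively matching the pattern the model is meant to capture.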

18.
The nonlinear random coefficient model has become increasingly popular as a method for describing individual differences in longitudinal research. Although promising, the nonlinear model is not utilized as often as it might be because software options are still somewhat limited. In this article we show that a specialized version of the model can be fit to data using SEM software. The specialization is to a model in which the parameters that enter the function in a linear manner are random, whereas those that are nonlinear are common to all individuals. Although this kind of function is not as general as the fully nonlinear model, it is still applicable to many different data sets. Two examples are presented to show how the models can be estimated using popular SEM computer programs.

19.
A computer program that collects suicide risk factors by computer interview from persons with thoughts of suicide and processes the data to provide risk predictions has been written and pilot tested. Patient acceptance of the interviewing technique was good; more than half of the patients interviewed preferred the computer to a doctor as an interviewer. Bayes' theorem is used to process the data collected against a subjective database. In a retrospective study comparing risk predictions made by the computer with predictions by experienced clinicians, the computer was more accurate in predicting suicide attempters (p < .01) and slightly less accurate in predicting nonattempters. The program is economical, can be used wherever a telephone and a computer terminal are available, and is readily and uniformly modified to include new data.
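The Bayesian updating step can be sketched in odds form: a prior probability is converted to odds, multiplied by one likelihood ratio per observed risk factor (assuming conditional independence of the factors), and converted back to a probability. The likelihood-ratio representation is an assumption for illustration; the article's actual subjective database is not reproduced here:

```python
def posterior_probability(prior, likelihood_ratios):
    """Sequential Bayes' theorem in odds form: start from a prior
    probability, multiply in one likelihood ratio per observed risk
    factor (naive conditional-independence assumption), and return
    the posterior probability."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)
```

For example, a prior of 0.2 (odds 1:4) combined with a single factor carrying a likelihood ratio of 4 yields even odds, i.e. a posterior probability of 0.5.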

20.
Syllogistic inference (total citations: 4; self-citations: 0; citations by others: 4)
This paper reviews current psychological theories of syllogistic inference and establishes that despite their various merits they all contain deficiencies as theories of performance. It presents the results of two experiments, one using syllogisms and the other using three-term series problems, designed to elucidate how the arrangement of terms within the premises (the 'figure' of the premises) affects performance. These data are used in the construction of a theory based on the hypothesis that reasoners construct mental models of the premises, formulate informative conclusions about the relations in the model, and search for alternative models that are counterexamples to these conclusions. This theory, which has been implemented in several computer programs, predicts that two principal factors should affect performance: the figure of the premises, and the number of models that they call for. These predictions were confirmed by a third experiment.
