Similar Articles
20 similar articles found (search time: 15 ms).
1.
The software described in this paper, VideoNoter, addresses the need for tools that support annotation and retrieval of video data and organize the presentation of multiple analyses of the same data. Video is widely perceived as an important medium for psychological research, because video recording makes the fleeting particulars of human interaction available as data for detailed analysis, while retaining much of the context of the event. Though the benefits of using video data are high, the process can be prohibitively time-consuming. We have developed a prototype computer-based video analysis tool that can enhance the productivity of the video analysis process. In this paper, we report on the design and implementation of VideoNoter, and we discuss how it facilitates video data analysis.

2.
The complexity of psychological science often requires the collection and analysis of multidimensional data. Such data bring about a corresponding cognitive load that has led scientists to develop techniques of scientific visualization to ease the burden. This paper provides an introduction to scientific visualization techniques, a framework for understanding those techniques, and an assessment of the suitability of this approach for psychology. The framework employed builds on the notion of balancing noise and smooth in statistical analysis.

3.
This paper considers the general problem of analyzing data for job similarities/differences. Cluster analysis and univariate analysis of variance, which are recent suggestions for attacking this problem, are briefly reviewed. The suggestion made in this paper is to use multivariate analysis of variance, accompanied by a multivariate extension of the well known proportion of variance index, ω². Discriminant analysis and related techniques are suggested to provide information regarding specific hypotheses. The potential users are provided with the references to well known computer packages that allow all the analyses to be performed easily, rapidly, and accurately on their own data. Appropriate interpretations of each result are also indicated, and illustrated with an example.
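The ω² index being extended here has a familiar univariate form. A minimal numpy sketch of that univariate baseline (the job-group data below are invented for illustration; the paper's multivariate extension is not shown):

```python
import numpy as np

def omega_squared(groups):
    """Classical univariate omega-squared for a one-way design:
    (SSB - (k-1)*MSW) / (SST + MSW)."""
    k = len(groups)
    pooled = np.concatenate(groups)
    grand_mean = pooled.mean()
    ssb = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    msw = ssw / (pooled.size - k)
    return (ssb - (k - 1) * msw) / (ssb + ssw + msw)

# three invented "jobs" rated on one variable
rng = np.random.default_rng(0)
jobs = [rng.normal(loc=m, scale=1.0, size=30) for m in (0.0, 0.5, 2.0)]
print(round(omega_squared(jobs), 3))
```

The estimate is slightly smaller than the raw proportion of variance (η²), since ω² corrects for the bias of sample means.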

4.
Several investigators have fit psychometric functions to data from adaptive procedures for threshold estimation. Although the threshold estimates are in general quite correct, one encounters a slope bias that has not been explained up to now. The present paper demonstrates slope bias for parametric and nonparametric maximum-likelihood fits and for Spearman-Kärber analysis of adaptive data. The examples include staircase and stochastic approximation procedures. The paper then presents an explanation of slope bias based on serial data dependency in adaptive procedures. Data dependency is first illustrated with simple two-trial examples and then extended to realistic adaptive procedures. Finally, the paper presents an adaptive staircase procedure designed to measure threshold and slope directly. In contrast to classical adaptive threshold-only procedures, this procedure varies both a threshold and a spread parameter in response to double trials.
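The serial dependency the paper invokes is easy to see in simulation: each stimulus level is a function of the previous response. A minimal sketch of a classical 1-up/1-down staircase run against a logistic observer (the logistic form, step size, and trial count are illustrative assumptions, not the paper's procedure):

```python
import math
import random

def p_yes(x, threshold=0.0, spread=1.0):
    """Logistic psychometric function: detection probability at level x."""
    return 1.0 / (1.0 + math.exp(-(x - threshold) / spread))

def staircase(n_trials=400, step=0.2, seed=1):
    """1-up/1-down staircase: converges on the 50% point (the threshold).

    Returns the mean of the second half of the visited levels as a
    threshold estimate."""
    random.seed(seed)
    x, levels = 1.0, []
    for _ in range(n_trials):
        levels.append(x)
        yes = random.random() < p_yes(x)
        x += -step if yes else step   # down after "yes", up after "no"
    tail = levels[n_trials // 2:]
    return sum(tail) / len(tail)

print(round(staircase(), 2))   # estimate of the true threshold (0.0 here)
```

Because each level depends on the previous response, the visited levels are serially correlated, which is exactly the dependency that biases slope estimates when a psychometric function is fitted to such data as if the trials were independent.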

6.
Visual inspection of data is a common method for understanding, responding to, and communicating important behavior-environment relations in single-subject research. In a field that was once dominated by cumulative, moment-to-moment records of behavior, a number of graphic forms currently exist that aggregate data into larger units. In this paper, we describe the continuum of aggregation that ranges from distant to intimate displays of behavioral data. To aid in an understanding of the conditions under which a more intimate analysis is warranted (i.e., one that provides a richer analysis than that provided by condition or session aggregates), we review a sample of research articles for which within-session data depiction has enhanced the visual analysis of applied behavioral research.

7.
AN EMPIRICAL ASSESSMENT OF DATA COLLECTION USING THE INTERNET
Identical questionnaire items were used to gather data from 2 samples of employees. One sample (n = 50) responded to a survey implemented on the World Wide Web. Another sample (n = 181) filled out a paper version of the survey. Analyses of the 2 data sets supported an exploration of the viability of World Wide Web data collection. The World Wide Web data had fewer missing values than the paper and pencil data. A covariance analysis simultaneously conducted in both samples indicated similar covariance structures among the tested variables. The costs and benefits of using access controls to improve sampling are discussed. Four applications that do not require such access controls are discussed.

8.
Multiple-set canonical correlation analysis (Generalized CANO or GCANO for short) is an important technique because it subsumes a number of interesting multivariate data analysis techniques as special cases. More recently, it has also been recognized as an important technique for integrating information from multiple sources. In this paper, we present a simple regularization technique for GCANO and demonstrate its usefulness. Regularization is deemed important as a way of supplementing insufficient data by prior knowledge, and/or of incorporating certain desirable properties in the estimates of parameters in the model. Implications of regularized GCANO for multiple correspondence analysis are also discussed. Examples are given to illustrate the use of the proposed technique. The work reported in this paper is supported by Grants 10630 and 290439 from the Natural Sciences and Engineering Research Council of Canada to the first and the second authors, respectively. The authors would like to thank the two editors (old and new), the associate editor, and four anonymous reviewers for their insightful comments on earlier versions of this paper. Matlab programs that carried out the computations reported in the paper are available upon request.
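The regularization idea can be sketched in the two-set special case (ordinary canonical correlation), which GCANO subsumes: a ridge term λI is added to each within-block covariance to stabilize the estimates. The data, λ, and variable counts below are invented, and this is not the paper's algorithm for more than two blocks:

```python
import numpy as np

def first_canonical_corr(X, Y, lam=0.1):
    """First canonical correlation with ridge-regularized block covariances.

    The square root of the largest eigenvalue of Cxx^-1 Cxy Cyy^-1 Cyx is
    the first canonical correlation; lam * I keeps the inverses stable."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + lam * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + lam * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    return float(np.sqrt(np.max(np.linalg.eigvals(M).real)))

# two noisy views of one invented latent variable z
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))
X = z + 0.5 * rng.normal(size=(200, 3))
Y = z + 0.5 * rng.normal(size=(200, 2))
print(round(first_canonical_corr(X, Y), 3))
```

The ridge term shrinks the correlation slightly, the price paid for stability when blocks are noisy or nearly singular.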

9.
An extension of multiple correspondence analysis is proposed that takes into account cluster-level heterogeneity in respondents’ preferences/choices. The method involves combining multiple correspondence analysis and k-means in a unified framework. The former is used for uncovering a low-dimensional space of multivariate categorical variables while the latter is used for identifying relatively homogeneous clusters of respondents. The proposed method offers an integrated graphical display that provides information on cluster-based structures inherent in multivariate categorical data as well as the interdependencies among the data. An empirical application is presented which demonstrates the usefulness of the proposed method and how it compares to several extant approaches. The work reported in this paper was supported by Grant 290439 and Grant A6394 from the Natural Sciences and Engineering Research Council of Canada to the first and third authors, respectively. We wish to thank Ulf Böckenholt, Paul Green, and Marc Tomiuk for their insightful comments on an earlier version of this paper. We also wish to thank Byunghwa Yang for generously providing us with his data.
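A rough two-step stand-in for the method: respondent coordinates from an SVD of the centred indicator matrix, followed by k-means on those coordinates. The paper's contribution is combining both steps in a single criterion, which this sketch does not attempt; the response data and category labels are invented:

```python
import numpy as np

def one_hot(column):
    """Indicator (dummy) coding of one categorical variable."""
    cats = sorted(set(column))
    return np.array([[1.0 if v == c else 0.0 for c in cats] for v in column])

def mca_coords(rows, n_components=2):
    """Respondent coordinates from an SVD of the centred indicator matrix
    (a simplified stand-in for full multiple correspondence analysis)."""
    G = np.hstack([one_hot(col) for col in zip(*rows)])
    G = G - G.mean(0)
    U, s, _ = np.linalg.svd(G, full_matrices=False)
    return U[:, :n_components] * s[:n_components]

def kmeans(X, k, iters=25):
    """Plain k-means with deterministic, evenly spaced initial centres."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# two invented respondent groups with distinct categorical answer patterns
rows = [("agree", "low")] * 10 + [("disagree", "high")] * 10
labels = kmeans(mca_coords(rows), 2)
print(labels)
```

On these toy data the two answer patterns map to two points in the coordinate space, so k-means recovers the groups exactly.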

10.
This paper presents an overview of an approach to the quantitative analysis of qualitative data with theoretical and methodological explanations of the two cornerstones of the approach, Alternating Least Squares and Optimal Scaling. Using these two principles, my colleagues and I have extended a variety of analysis procedures originally proposed for quantitative (interval or ratio) data to qualitative (nominal or ordinal) data, including additivity analysis and analysis of variance; multiple and canonical regression; principal components; common factor and three mode factor analysis; and multidimensional scaling. The approach has two advantages: (a) If a least squares procedure is known for analyzing quantitative data, it can be extended to qualitative data; and (b) the resulting algorithm will be convergent. Three completely worked through examples of the additivity analysis procedure and the steps involved in the regression procedures are presented. Presented as the Presidential Address to the Psychometric Society's Annual Meeting, May 1981. I wish to express my deep appreciation to Jan de Leeuw and Yoshio Takane. Our team effort was essential for the developments reported in this paper. Without this effort the present paper would not exist. Portions of this paper appear in Lantermann, E. D. & Feger, H. (Eds.) Similarity and Choice, Hans Huber, Vienna, 1980. The present paper benefits greatly from a set of detailed comments made by Joseph Kruskal on the earlier paper.
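For ordinal data, the optimal scaling step inside such an alternating least squares loop is a least-squares monotone regression, solvable exactly by the pool-adjacent-violators algorithm. A minimal sketch of that inner step:

```python
def pava(y):
    """Pool-adjacent-violators: the least-squares monotone (isotonic) fit,
    used as the optimal-scaling step for ordinal data in ALS algorithms."""
    values, weights = [], []
    for v in map(float, y):
        values.append(v)
        weights.append(1.0)
        # pool adjacent blocks while they violate monotonicity
        while len(values) > 1 and values[-2] > values[-1]:
            w = weights[-2] + weights[-1]
            pooled = (weights[-2] * values[-2] + weights[-1] * values[-1]) / w
            values[-2:] = [pooled]
            weights[-2:] = [w]
    fitted = []
    for v, w in zip(values, weights):
        fitted.extend([v] * int(w))
    return fitted

print(pava([1, 3, 2, 4]))   # → [1.0, 2.5, 2.5, 4.0]
```

In a full ALS procedure this scaling step alternates with an ordinary least-squares model fit, and since each step cannot increase the loss, the algorithm converges, which is advantage (b) above.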

11.
This paper is a multi-layered account that begins with an overview of narrative inquiry and narrative analysis methodologies, and then leads into an examination of how the process of editing a book can represent an example of narrative research. The author describes how she came to create two books that provide an insider's view of counsellors' role development within the fields of health and rehabilitation. Taking the task of editor a step further, she gathered additional reflexive data from chapter authors once the books were published: these data provide insight into the challenges created by undertaking a project of this kind, in particular the adoption of a reflexive voice. The paper concludes with a discussion of how both projects meet the criteria for narrative inquiry.

12.
This paper presents a cognitive analysis of subjective probability judgments and proposes that these are assessments of belief-processing activities. The analysis is motivated by an investigation of the concepts of belief, knowledge, and uncertainty. Judgment and reasoning are differentiated, Toulmin's (1958) theory of argument being used to explicate the latter. The paper discusses a belief-processing model in which reasoning is used to translate data into conclusions, while judgmental processes qualify those conclusions with degrees of belief. The model sheds light on traditional interpretations of probability and suggests that different characteristics of belief—likelihood and support—are addressed by different representational systems. In concluding, the paper identifies new lines of research implied by its analysis.

13.
14.
In many areas of science, research questions imply the analysis of a set of coupled data blocks, with, for instance, each block being an experimental unit by variable matrix, and the variables being the same in all matrices. To obtain an overall picture of the mechanisms that play a role in the different data matrices, the information in these matrices needs to be integrated. This may be achieved by applying a data-analytic strategy in which a global model is fitted to all data matrices simultaneously, as in some forms of simultaneous component analysis (SCA). Since such a strategy implies that all data entries, regardless of the matrix they belong to, contribute equally to the analysis, it may obfuscate the overall picture of the mechanisms underlying the data when the different data matrices are subject to different amounts of noise. One way out is to downweight entries from noisy data matrices in favour of entries from less noisy matrices. Information regarding the amount of noise that is present in each matrix, however, is, in most cases, not available. To deal with these problems, in this paper a novel maximum-likelihood-based simultaneous component analysis method, referred to as MxLSCA, is proposed. Being a stochastic extension of SCA, in MxLSCA the amount of noise in each data matrix is estimated and entries from noisy data matrices are downweighted. Both in an extensive simulation study and in an application to data stemming from cross-cultural emotion psychology, it is shown that the novel MxLSCA strategy outperforms the SCA strategy with respect to disclosing the mechanisms underlying the coupled data.
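The downweighting idea can be sketched crudely: estimate each block's noise level, rescale the block by it, and run an ordinary simultaneous component analysis (here a joint SVD of the stacked blocks). MxLSCA estimates the noise by maximum likelihood inside the model; this plug-in version is only an illustration, and the data are invented:

```python
import numpy as np

def weighted_sca(blocks, n_components=2):
    """Noise-weighted simultaneous component analysis (a rough stand-in
    for MxLSCA). Each block (rows = units, same variables in every block)
    is divided by a crude estimate of its noise standard deviation before
    a joint SVD extracts a common loading matrix."""
    weighted = []
    for B in blocks:
        B = B - B.mean(0)
        # crude noise estimate: residual sd after a rank-n_components fit
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        resid = B - (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
        sigma = resid.std() + 1e-12
        weighted.append(B / sigma)
    X = np.vstack(weighted)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_components].T        # common loading matrix

# two invented coupled blocks sharing one loading matrix: one clean, one noisy
rng = np.random.default_rng(0)
loadings = rng.normal(size=(6, 2))
blocks = [rng.normal(size=(40, 2)) @ loadings.T + s * rng.normal(size=(40, 6))
          for s in (0.1, 1.0)]
A = weighted_sca(blocks)
print(A.shape)
```

Dividing each block by its estimated noise level means the clean block dominates the joint fit, which is the behaviour the paper seeks, here obtained without the likelihood machinery.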

15.
In this paper we discuss the use of a recent dimension reduction technique called Locally Linear Embedding, introduced by Roweis and Saul, for performing an exploratory latent structure analysis. The coordinate variables from the locally linear embedding describing the manifold on which the data reside serve as the latent variable scores. We propose the use of semiparametric penalized spline methods for reconstruction of the manifold equations that approximate the data space. We also discuss a cross-validation strategy that can guide in selecting an appropriate number of latent variables. Synthetic as well as real data sets are used to illustrate the proposed approach. A nonlinear latent structure representation of a data set also serves as a data visualization tool.
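Roweis and Saul's algorithm has three steps: find each point's nearest neighbours, solve for the weights that best reconstruct each point from its neighbours, and take the bottom non-constant eigenvectors of (I − W)ᵀ(I − W) as the latent coordinates. A compact numpy sketch on an invented one-dimensional manifold (the paper's penalized-spline reconstruction step is not shown):

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal Locally Linear Embedding."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:n_neighbors + 1]    # skip the point itself
        Z = X[idx] - X[i]
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize Gram matrix
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, idx] = w / w.sum()                       # weights sum to one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]   # drop the constant eigenvector

# invented data: a gently curving 1-D manifold embedded in 3-D, plus noise
t = np.linspace(0.0, 1.0, 120)
rng = np.random.default_rng(0)
X = np.c_[np.cos(2 * t), np.sin(2 * t), t] + 0.01 * rng.normal(size=(120, 3))
Y = lle(X, n_components=1)
print(Y.shape)
```

The single recovered coordinate should track the latent parameter t up to sign and a monotone distortion, which is exactly the sense in which the embedding coordinates can serve as latent variable scores.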

16.
Exploratory Bi-Factor Analysis
Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger. The bi-factor model has a general factor and a number of group factors. The purpose of this paper is to introduce an exploratory form of bi-factor analysis. An advantage of using exploratory bi-factor analysis is that one need not provide a specific bi-factor model a priori. The result of an exploratory bi-factor analysis, however, can be used as an aid in defining a specific bi-factor model. Our exploratory bi-factor analysis is simply exploratory factor analysis using a bi-factor rotation criterion. This is a criterion designed to produce perfect cluster structure in all but the first column of a rotated loading matrix. Examples are given to show how exploratory bi-factor analysis can be used with ideal and real data. The relation of exploratory bi-factor analysis to the Schmid-Leiman method is discussed.

17.
The CQUniversity Australia Human Research Ethics Committee (HREC) is a human ethics research committee registered under the auspices of the National Health and Medical Research Council. In 2009 an external review of CQUniversity Australia’s HREC policies and procedures recommended that a low risk research process be available to the institution’s researchers. Subsequently, in 2010 the Human Research Ethics Committee Low Risk Application Procedure came into operation. This paper examines the applications made under the Human Research Ethics Committee Low Risk Application Procedure during the course of 2010 and 2011. The paper contributes to the literature analyzing the decision-making processes of research review committees through an analysis of the quantitative data relating to the low risk research applications made and through discourse analysis of the qualitative data represented by the assessment comments of the members of the Committee.

18.
This paper describes SHAPA Version 2.01, an interactive program for performing verbal protocol analysis. Verbal protocol analysis is a time-consuming activity that has hitherto typically been done by hand, whereas SHAPA represents an attempt to build a software environment to aid (but not replace) researchers in this activity. SHAPA allows researchers to develop an encoding vocabulary, to apply it to raw verbal protocol files, and to perform various types of data aggregation and data reduction. When performing verbal protocol analysis, researchers often try out different possible coding schemes before settling on the most effective one. SHAPA has been designed to support quick alterations to an encoding vocabulary and to support the rigorous statistical analysis of content and patterns (sequential data analysis) in protocol data. It is intended as an exploratory, as well as analytical, tool and has been designed to be easy for novices to learn and use, yet fluid and powerful for experts. A prototype version of SHAPA has already been used by a sample of researchers, and their experiences and requests have guided the programming of the present much more powerful version.
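The workflow SHAPA supports — define an encoding vocabulary, apply it to raw utterances, then run a sequential (transition-frequency) analysis — can be sketched in a few lines. The vocabulary, codes, and protocol below are invented for illustration and are not SHAPA's own format:

```python
import re
from collections import Counter

# A toy encoding vocabulary: regex pattern -> code (hypothetical categories).
vocabulary = {
    r"\b(goal|want|try)\b": "GOAL",
    r"\b(because|so that)\b": "REASON",
    r"\b(click|press|type)\b": "ACTION",
}

def encode(utterances):
    """Assign the first matching code to each utterance ('OTHER' if none)."""
    codes = []
    for u in utterances:
        for pattern, code in vocabulary.items():
            if re.search(pattern, u, re.IGNORECASE):
                codes.append(code)
                break
        else:
            codes.append("OTHER")
    return codes

def transitions(codes):
    """First-order sequential analysis: frequencies of adjacent code pairs."""
    return Counter(zip(codes, codes[1:]))

protocol = [
    "I want to open the file",
    "so I press the menu key",
    "because the dialog is hidden",
    "then I type the name",
]
codes = encode(protocol)
print(codes)
print(transitions(codes))
```

Because the vocabulary is just a dictionary, a coding scheme can be revised and re-applied to the raw protocol in one step, which mirrors the quick-alteration workflow the abstract describes.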

19.
Applying movement analysis methodology on-orbit, aboard space stations, to study the role of gravity in motor functions requires careful adaptation of currently adopted techniques in order to obtain reliable data. In those operative conditions, unlike common ground-based experimental work, a non-specialist operator, an astronaut of the space station crew, is expected to self-administer the experimental protocol, in particular by self-marking specific anatomical landmarks. The present paper proposes a movement analysis methodology that fits the specific constraints of space activity, maximising reliability while minimising on-orbit time, and reports normative data on the accuracy and precision of self-marking an extended set of anatomical landmarks. The same set of landmarks was also direct-marked by experts in motion analysis, and their results are compared with the self-marking results. The paper's contents will support the design of future space experimental campaigns and are, in general, applicable to any ground-based scientific investigation, potentially increasing data reliability.

20.
This paper demonstrates a method of transferring research data from a remote clinic to a large university mainframe for data manipulation and statistical analysis. Data collected by an Apple //e computer were transferred to an IBM 3031 mainframe by sending data files to an IBM PC by telephone modem or by direct hardwire connection to the PC. The IBM PC performed data-formatting routines and then uploaded the files to the mainframe for storage. Advantages and disadvantages of sending data over telephone lines via a modem are discussed.
