Similar articles
20 similar articles were found for this query.
1.
Scientific LogAnalyzer is a platform-independent interactive Web service for the analysis of log files. Scientific LogAnalyzer offers several features not available in other log file analysis tools—for example, organizational criteria and computational algorithms suited to aid behavioral and social scientists. Scientific LogAnalyzer is highly flexible on the input side (unlimited types of log file formats), while strictly keeping a scientific output format. Features include (1) free definition of log file format, (2) searching and marking dependent on any combination of strings (necessary for identifying conditions in experiment data), (3) computation of response times, (4) detection of multiple sessions, (5) speedy analysis of large log files, (6) output in HTML and/or tab-delimited form, suitable for import into statistics software, and (7) a module for analyzing and visualizing drop-out. Several methodological features specifically needed in the analysis of data collected in Internet-based experiments have been implemented in the Web-based tool and are described in this article. A regression analysis with data from 44 log file analyses shows that the size of the log file and the domain name lookup are the two main factors determining the duration of an analysis. It is less than a minute for a standard experimental study with a 2 × 2 design, a dozen Web pages, and 48 participants (ca. 800 lines, including data from drop-outs). The current version of Scientific LogAnalyzer is freely available for small log files. Its Web address is http://genpsylab-logcrunsh.unizh.ch/.
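To make the response-time and drop-out computations mentioned above concrete, here is a minimal Python sketch (not Scientific LogAnalyzer's own code) that assumes a hypothetical tab-delimited log with columns participant_id, unix_timestamp, and page; the final-page label is likewise an assumption.

```python
# Minimal sketch, not Scientific LogAnalyzer itself: compute per-page response
# times and drop-out from a hypothetical tab-delimited log with columns
# participant_id \t unix_timestamp \t page. The column layout and the
# final-page label "debriefing.html" are assumptions for illustration.
import csv
from collections import defaultdict

def analyze(log_path, final_page="debriefing.html"):
    visits = defaultdict(list)                      # participant -> [(time, page), ...]
    with open(log_path, newline="") as f:
        for pid, ts, page in csv.reader(f, delimiter="\t"):
            visits[pid].append((float(ts), page))

    results = {}
    for pid, rows in visits.items():
        rows.sort()                                 # order requests by time
        # response time for a page = time until the next request
        rts = [(page, t_next - t) for (t, page), (t_next, _) in zip(rows, rows[1:])]
        dropped_out = rows[-1][1] != final_page     # never reached the last page
        results[pid] = {"response_times": rts, "dropped_out": dropped_out}
    return results
```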

2.
Scientific LogAnalyzer is a platform-independent interactive Web service for the analysis of log files. Scientific LogAnalyzer offers several features not available in other log file analysis tools--for example, organizational criteria and computational algorithms suited to aid behavioral and social scientists. Scientific LogAnalyzer is highly flexible on the input side (unlimited types of log file formats), while strictly keeping a scientific output format. Features include (1) free definition of log file format, (2) searching and marking dependent on any combination of strings (necessary for identifying conditions in experiment data), (3) computation of response times, (4) detection of multiple sessions, (5) speedy analysis of large log files, (6) output in HTML and/or tab-delimited form, suitable for import into statistics software, and (7) a module for analyzing and visualizing drop-out. Several methodological features specifically needed in the analysis of data collected in Internet-based experiments have been implemented in the Web-based tool and are described in this article. A regression analysis with data from 44 log file analyses shows that the size of the log file and the domain name lookup are the two main factors determining the duration of an analysis. It is less than a minute for a standard experimental study with a 2 x 2 design, a dozen Web pages, and 48 participants (ca. 800 lines, including data from drop-outs). The current version of Scientific LogAnalyzer is freely available for small log files. Its Web address is http://genpsylab-logcrunsh.unizh.ch/.

3.
In the present study, we investigated three factors that were assumed to have a significant influence on the success of learning from multiple hypertexts, and on the construction of a documents model in particular. These factors were task (argumentative vs. narrative), available text material (with vs. without primary sources), and presentation format (active vs. static). The study was conducted with the help of the combination of three tools (DEWEX, Chemnitz LogAnalyzer, and SummTool) developed for Web-based experimenting. The results show that the task is the most important factor for successful learning from multiple hypertexts. Depending on the task, the participants were either able or unable to apply adequate strategies, such as considering the source information. It was also observed that argumentative tasks were supported by an active hypertext presentation format, whereas performance on narrative tasks increased with a passive presentation format. No effect was shown for the type of texts available.

4.
DEWEX is a server-based environment for developing Web-based experiments. It provides many features for creating and running complex experimental designs on a local server. It is freeware and allows for both using default features, for which only text input is necessary, and easy configurations that can be set up by the experimenter. The tool also provides log files on the local server that can be interpreted and analyzed very easily. As an illustration of how DEWEX can be used, a recent study is presented that demonstrates the system’s most important features. This study investigated learning from multiple hypertext sources and shows the influences of task, source of information, and hypertext presentation format on the construction of mental representations of a hypertext about a historical event.

5.
We have developed a new software application, Eye-gaze Language Integration Analysis (ELIA), which allows for the rapid integration of gaze data with spoken language input (either live or prerecorded). Specifically, ELIA integrates E-Prime output and/or .csv files that include eye-gaze and real-time language information. The process of combining eye movements with real-time speech often involves multiple error-prone steps (e.g., cleaning, transposing, graphing) before a simple time course analysis plot can be viewed or before data can be imported into a statistical package. Some of the advantages of this freely available software include (1) reducing the amount of time spent preparing raw eye-tracking data for analysis; (2) allowing for the quick analysis of pilot data in order to identify issues with experimental design; (3) facilitating the separation of trial types, which allows for the examination of supplementary effects (e.g., order or gender effects); and (4) producing standard output files (i.e., .csv files) that can be read by numerous spreadsheet packages and transferred to any statistical software.
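A minimal Python sketch of the gaze-speech integration idea described above (not ELIA's implementation): each gaze sample is tagged with the word being spoken at that moment, which is the basic join a time course analysis needs. The field names time_ms, roi, onset_ms, offset_ms, and word are assumptions.

```python
# Minimal sketch of gaze/speech integration (not ELIA's actual code): label each
# gaze sample with the word being spoken at that moment. Field names are
# assumptions for illustration.
def align_gaze_to_speech(gaze_samples, word_onsets):
    """gaze_samples: list of dicts {time_ms, roi}; word_onsets: list of dicts
    {word, onset_ms, offset_ms}. Returns samples tagged with the current word."""
    aligned = []
    for sample in gaze_samples:
        current = next((w["word"] for w in word_onsets
                        if w["onset_ms"] <= sample["time_ms"] < w["offset_ms"]), None)
        aligned.append({**sample, "word": current})
    return aligned
```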

6.
Psychologists are often faced with the need to manipulate one or more files, either to modify the format or to extract specific information. Traditionally, these manipulations have been performed using programming languages or statistical software, but such solutions are often expensive, platform dependent, or limited in their ability to handle both numerical and textual data. This tutorial introduces the perl programming language, a free, platform-independent language that excels at pattern matching and text processing but that is also numerically capable. A running example illustrates an application of perl to psychological data.
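For illustration only, here is a Python sketch of the kind of pattern-matching file manipulation the tutorial demonstrates in perl (this is not the tutorial's own example): extracting subject numbers and reaction times from free-form log lines and writing a tab-delimited file. The line format and file names are invented.

```python
# Python analogue of a regex-based extraction task; the input format, the
# pattern, and the file names are made up for illustration.
import re

pattern = re.compile(r"subject\s+(\d+).*?rt=(\d+)ms")

with open("raw_output.txt") as src, open("rts.tsv", "w") as out:
    out.write("subject\trt_ms\n")
    for line in src:
        match = pattern.search(line)
        if match:
            out.write(f"{match.group(1)}\t{match.group(2)}\n")
```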

7.
Many experiment-running programs generate output files that require selection, reduction, and formatting of the raw data before the numbers are suitable for input into statistical packages. PsySquash is a Macintosh program for the selection, organization, and summary of the tabular data that are produced by a widely used freeware data acquisition system, PsyScope. PsySquash serves as a bridge between PsyScope’s output data format and the input formats required by common statistical packages such as SAS, SPSS, and SuperAnova. An extension of PsySquash is proposed for use with arbitrary tabular data.

8.
This paper describes SHAPA Version 2.01, an interactive program for performing verbal protocol analysis. Verbal protocol analysis is a time-consuming activity that has hitherto typically been done by hand, whereas SHAPA represents an attempt to build a software environment to aid (but not replace) researchers in this activity. SHAPA allows researchers to develop an encoding vocabulary, to apply it to raw verbal protocol files, and to perform various types of data aggregation and data reduction. When performing verbal protocol analysis, researchers often try out different possible coding schemes before settling on the most effective one. SHAPA has been designed to support quick alterations to an encoding vocabulary and to support the rigorous statistical analysis of content and patterns (sequential data analysis) in protocol data. It is intended as an exploratory, as well as analytical, tool and has been designed to be easy for novices to learn and use, yet fluid and powerful for experts. A prototype version of SHAPA has already been used by a sample of researchers, and their experiences and requests have guided the programming of the present much more powerful version.
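A minimal Python sketch of the encode-then-aggregate workflow described above (not SHAPA itself): raw protocol segments are mapped to codes via keyword rules, and code frequencies plus first-order code-to-code transitions are counted. The encoding vocabulary below is a made-up example.

```python
# Minimal sketch of applying an encoding vocabulary to protocol segments and
# aggregating the result; the vocabulary and keywords are invented.
from collections import Counter

VOCAB = {                      # hypothetical encoding vocabulary: code -> keywords
    "GOAL":     ["want", "try to", "goal"],
    "EVALUATE": ["looks good", "wrong", "check"],
    "OPERATE":  ["click", "move", "type"],
}

def encode(segment):
    seg = segment.lower()
    for code, keywords in VOCAB.items():
        if any(k in seg for k in keywords):
            return code
    return "OTHER"

def analyze(segments):
    codes = [encode(s) for s in segments]
    freqs = Counter(codes)
    transitions = Counter(zip(codes, codes[1:]))   # simple sequential analysis
    return freqs, transitions
```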

9.
张智君, 任衍具, 宿芳. 《心理学报》 (Acta Psychologica Sinica), 2004, 36(5): 534-539
Two experiments examined the effects of hypertext structure, task type, and navigation aids on hypertext information-search performance. Experiment 1 used a 2 (hierarchical vs. mixed structure) × 2 (specific vs. relational task) within-subjects design to examine the effects of hypertext structure and task type on information search. Building on Experiment 1, Experiment 2 used a 2 (hierarchical vs. mixed structure) × 2 (with vs. without navigation map) within-subjects design to examine the effects of hypertext structure and navigation on information search in relational tasks. The results showed that (1) hypertext structure and task type had a significant interactive effect on search performance: for relational tasks, mixed-structure hypertext outperformed hierarchical hypertext, whereas for specific tasks there was no significant difference; (2) navigation maps guided search behavior and were especially beneficial for hierarchical hypertext; and (3) the two subjective measures were, to a certain extent, consistent with the objective measures.

10.
This article presents GazeAlyze, a software package written as a MATLAB (MathWorks Inc., Natick, MA) toolbox for the analysis of eye movement data. GazeAlyze was developed for the batch processing of multiple data files and was designed as a framework with extendable modules. GazeAlyze encompasses the main functions of the entire processing queue of eye movement data to static visual stimuli. This includes detecting and filtering artifacts, detecting events, generating regions of interest, generating spread sheets for further statistical analysis, and providing methods for the visualization of results, such as path plots and fixation heat maps. All functions can be controlled through graphical user interfaces. GazeAlyze includes functions for correcting eye movement data for the displacement of the head relative to the camera after calibration in fixed head mounts. The preprocessing and event detection methods in GazeAlyze are based on the software ILAB 3.6.8 (Gitelman, Behavior Research Methods, Instruments, & Computers, 34(4), 605-612, 2002). GazeAlyze is distributed free of charge under the terms of the GNU public license and allows code modifications to be made so that the program's performance can be adjusted according to a user's scientific requirements.
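A minimal Python sketch of the region-of-interest step mentioned above (GazeAlyze itself is a MATLAB toolbox, so this is only an illustration of the idea): fixations are assigned to rectangular ROIs and dwell time is summed per ROI. The ROI names and the fixation tuple layout are assumptions.

```python
# Minimal sketch of ROI assignment and dwell-time aggregation; ROIs and the
# fixation format are invented for illustration.
from collections import defaultdict

ROIS = {"face": (100, 100, 300, 300), "text": (400, 100, 700, 400)}  # x1, y1, x2, y2

def dwell_times(fixations):
    """fixations: iterable of (x, y, duration_ms) tuples."""
    totals = defaultdict(float)
    for x, y, dur in fixations:
        for name, (x1, y1, x2, y2) in ROIS.items():
            if x1 <= x <= x2 and y1 <= y <= y2:
                totals[name] += dur
                break                      # a fixation counts toward one ROI
    return dict(totals)
```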

11.
Navigational behavior on the Web can be analyzed with different methods. Log file data are an important source of behavioral traces of navigation. In this paper, we first discuss existing approaches to the classification and visualization of movement sequences that are important for understanding Web navigation. We then present STRATDYN, a tool that provides meaningful quantitative and qualitative measures from server-generated log files, as well as easy-to-follow visualizations of navigational paths of individual users. We demonstrate the usefulness of this new approach by reporting the results of two studies (with 44 students in education and vocational training), which show that navigational effectiveness is positively related to the ability to concentrate and selectively focus attention, as measured by the D2 Test of Attention and the FWIT, a German version of the Stroop test. Finally, we discuss implications for further research in this area and for the continuing development of the approach presented.

12.
This paper briefly overviews the World-Wide Web. It then provides a short tutorial on the use of the hypertext markup language to publish information on the web. Hypertext markup language is a special page-layout language that was developed to help make creating and retrieving information on the web consistent and efficient. Hyperlinks within hypertext markup language make the language especially powerful because they enable the browser to transparently retrieve images, movies, or audio files from virtually any computer on earth simply by clicking on an item displayed on the monitor.
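As a minimal illustration of the markup the tutorial covers, the following Python snippet writes a one-page HTML file containing a hyperlink and an inline image; the file names and URLs are placeholders, not examples from the paper.

```python
# Write a minimal HTML page with a hyperlink and an inline image; file names
# and linked resources are placeholders for illustration.
page = """<html>
  <head><title>Lab home page</title></head>
  <body>
    <h1>Reaction-time study</h1>
    <p>Read the <a href="instructions.html">instructions</a> before starting.</p>
    <img src="apparatus.gif" alt="Apparatus">
  </body>
</html>"""

with open("index.html", "w") as f:
    f.write(page)
```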

13.
The Observer is a general-purpose software package for event recording and data analysis in behavioral research. It allows any IBM-type personal computer to serve as an event recorder. In addition, The Observer can generate dedicated event-recording programs for several types of non-IBM-compatible portable and hand-held computers and transfer files between the PC and such computers. The user specifies options through menus. The configuration can be either used directly for event recording on the PC or passed on to a program generator that creates a program to collect data on a hand-held computer. Observational data from either type of computer can be analyzed by the program. Event-recording configurations can be tailored to many different experimental designs. Keys can be designated as events, and modifiers can be used to indicate the limits of an event. The program allows grouping of events in classes and distinction between mutually exclusive versus nonexclusive events and duration events versus frequency events. Timing of events is accurate to 0.1 sec. An on-line electronic notepad permits notes to be made during an observation session. The program also includes on-line error correction. User comments as well as independent variables can be stored together with the observational data. During data analysis, the user can select the level of analysis and the type of output file. The Observer calculates frequency of occurrence and duration for classes of events, individual events, or combinations of events. For analysis of concurrence, one can select the number of nesting levels and the order of nesting. Output can be generated in the form of sorted event sequence files, text report files, and tabular ASCII files. The results can be exported to spreadsheet and statistical programs.
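A minimal Python sketch of the frequency and duration analysis described above (not The Observer's own code), assuming a chronologically ordered list of (time, event) records in which each event ends at the onset of the next; the 0.1-s rounding and the "END" terminator are assumptions for illustration.

```python
# Compute how often each event occurred and how long it lasted in total, given
# mutually exclusive duration events closed by the onset of the next record.
from collections import Counter, defaultdict

def frequencies_and_durations(records):
    """records: list of (time_sec, event), chronologically ordered, ending with
    a terminating (time_sec, "END") record."""
    freq = Counter()
    total_dur = defaultdict(float)
    for (t, event), (t_next, _) in zip(records, records[1:]):
        freq[event] += 1
        total_dur[event] += round(t_next - t, 1)   # mimic 0.1-s timing resolution
    return freq, dict(total_dur)

# Example: frequencies_and_durations([(0.0, "walk"), (3.2, "groom"),
#                                      (7.5, "walk"), (9.0, "END")])
```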

14.
We have developed CowLog, which is open-source software for recording behaviors from digital video and is easy to use and modify. CowLog tracks the time code from digital video files. The program is suitable for coding any digital video, but the authors have used it in animal research. The program has two main windows: a coding window, which is a graphical user interface used for choosing video files and defining output files that also has buttons for scoring behaviors, and a video window, which displays the video used for coding. The windows can be used in separate displays. The user types the key codes for the predefined behavioral categories, and CowLog transcribes their timing from the video time code to a data file. CowLog comes with an additional feature, an R package called Animal, for elementary analyses of the data files. With the analysis package, the user can calculate the frequencies, bout durations, and total durations of the coded behaviors and produce summary plots from the data.

15.
Reaction time (RT) data afford different types of analyses. One type of analysis, called “curve analysis,” can be used to characterize the evolution of performance at different moments over the course of learning. By contrast, distribution analysis aims at characterizing the spread of RTs at a specific moment. Techniques to deduce free parameters are described for both types of analyses, given an a priori choice of the curve or distribution one wants to fit, along with statistical tests of significance for distribution analysis: The log likelihood technique is used if the probability density function is given; otherwise, a root-mean-square-deviation minimization technique is used. A program—PASTIS—that searches for the optimal parameters of the following curves is presented: power law, exponential, and e-based exponential. PASTIS also searches for Weibull and the ex-Gaussian distributions. Some tests of the software are presented.
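The ex-Gaussian named above is the convolution of a Gaussian (mu, sigma) with an exponential (tau). Below is a minimal Python sketch of the log-likelihood fitting technique, not PASTIS itself, using SciPy's exponnorm distribution, whose shape parameter relates to the usual parameters by K = tau / sigma. The starting values are rough guesses.

```python
# Minimal sketch of ex-Gaussian fitting by maximum likelihood (not PASTIS).
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def fit_exgaussian(rts):
    rts = np.asarray(rts, dtype=float)

    def neg_log_likelihood(params):
        mu, sigma, tau = params
        if sigma <= 0 or tau <= 0:
            return np.inf                  # keep the search in the valid region
        return -np.sum(stats.exponnorm.logpdf(rts, K=tau / sigma,
                                               loc=mu, scale=sigma))

    start = [rts.mean() - rts.std() / 2, rts.std() / 2, rts.std() / 2]  # rough start
    result = minimize(neg_log_likelihood, start, method="Nelder-Mead")
    mu, sigma, tau = result.x
    return mu, sigma, tau
```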

16.

Aperiodic materials, such as two-dimensional quasicrystalline structures, show characteristic elements in transmission electron microscopy images that are closely related to structure elements in the corresponding material. Although the arrangement of these structure elements fundamentally influences the properties of the two-dimensional quasicrystalline structures, it cannot be determined in a satisfactory way by conventional diffraction-based methods. We have developed an automated procedure for analysing images obtained by high-resolution transmission electron microscopy. The method, which is based on image processing, determines the two-dimensional arrangement of the characteristic features and enables subsequent statistical data analysis. It is illustrated with new tiling analyses of highly perfect decagonal Al-Co-Ni quasicrystals.

17.
Several methods are available for analyzing different aspects of behavioral transition matrices, but a comprehensive framework for their use is lacking. We analyzed parasitoid foraging behavior in environments with different plant species compositions. The resulting complex data sets were analyzed using the following stepwise procedure. We detected abrupt changes in the event log files of parasitoids, using a maximum likelihood method. This served as a criterion for splitting the event log files into two parts. For both parts, Mantel’s test was used to detect differences between first-order transition matrices, whereas an iterative proportional fitting method was used to find behavioral flows that deviated from random transitions. In addition, hidden repetitive sequences were detected in the transition matrices on the basis of their relative timing, using Theme. We discuss the results for the example from a biological context and the comprehensive use of the different methods. We stress the importance of such a combined stepwise analysis for detecting differences in some parts of event log files.
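A minimal Python sketch of the first-order transition matrix such a stepwise analysis starts from: transition counts between consecutive behaviors are converted to row-wise probabilities. The example event sequence is invented.

```python
# Build a first-order behavioral transition matrix from an event sequence.
from collections import Counter

def transition_matrix(events):
    states = sorted(set(events))
    counts = Counter(zip(events, events[1:]))
    # rows = current behavior, columns = next behavior, values = probabilities
    matrix = {}
    for a in states:
        row_total = sum(counts[(a, b)] for b in states)
        matrix[a] = {b: (counts[(a, b)] / row_total if row_total else 0.0)
                     for b in states}
    return matrix

# Example: transition_matrix(["search", "probe", "search", "walk", "probe"])
```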

18.
A new computer software tool for coding and analyzing verbal report data is described. Combining and extending the capabilities of earlier verbal report coding software tools, CAPAS 2.0 enables researchers to code two different types of verbal report data: (1) verbal reports already transcribed and stored in text files and (2) verbal reports in their original digitally recorded audio format. For both types of data, individual verbal report segments are presented in random order and coded independently of other segments in accordance with a localized encoding principle. Once all reports are coded, CAPAS 2.0 converts the coded reports to a formatted file suitable for analysis by statistical packages such as SPSS.

19.
Behavioral researchers have employed hypermedia-based software applications in their experiments for some time. More recently, interest in the World-Wide Web has developed among researchers in the social sciences, and popular use of this new medium continues to grow at an incredible rate. This paper describes Listener, a tool developed to log users’ hypermedia and World-Wide Web navigation behavior using Apple Macintosh computers in a laboratory setting. Listener is able to capture navigation actions through cached documents, overcoming some of the problems associated with analyzing standard web server logs.

20.
RTSYS is a menu-driven DOS application for the manipulation, analysis, and graphical display of reaction time data. It can be used either in a single-task environment under DOS, with access to a set of operating system commands, or as an application under Windows. All functions have context-sensitive help. RTSYS fits the ex-Gaussian distribution to reaction time data without the difficulties usually associated with numerical parameter estimation. Distribution fitting and flexible censoring and rescaling options allow RTSYS to address the problems of reaction time distribution skew and outlying responses with reasonable sample sizes. RTSYS can automatically process multiple input files from experiments with arbitrary designs and produce formatted output of statistics for further processing by graphical and inferential statistical packages. The present article reviews and explains techniques used by RTSYS and provides an overview of the operation of the program.
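A minimal Python sketch of the censoring step mentioned above (not RTSYS, which is a DOS program): reaction times outside absolute cutoffs, and then beyond k standard deviations of the remaining mean, are dropped before fitting. The cutoff values are arbitrary examples.

```python
# Censor outlying reaction times with absolute and relative cutoffs; the
# default cutoff values are arbitrary examples, not RTSYS defaults.
import statistics

def censor_rts(rts, low=150, high=3000, k=3.0):
    kept = [rt for rt in rts if low <= rt <= high]               # absolute cutoffs
    if len(kept) > 1:
        mean, sd = statistics.mean(kept), statistics.stdev(kept)
        kept = [rt for rt in kept if abs(rt - mean) <= k * sd]   # relative cutoff
    return kept
```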
