Similar Documents
20 similar documents found.
1.
Scientific LogAnalyzer is a platform-independent interactive Web service for the analysis of log files. Scientific LogAnalyzer offers several features not available in other log file analysis tools--for example, organizational criteria and computational algorithms suited to aid behavioral and social scientists. Scientific LogAnalyzer is highly flexible on the input side (unlimited types of log file formats), while strictly keeping a scientific output format. Features include (1) free definition of log file format, (2) searching and marking dependent on any combination of strings (necessary for identifying conditions in experiment data), (3) computation of response times, (4) detection of multiple sessions, (5) speedy analysis of large log files, (6) output in HTML and/or tab-delimited form, suitable for import into statistics software, and (7) a module for analyzing and visualizing drop-out. Several methodological features specifically needed in the analysis of data collected in Internet-based experiments have been implemented in the Web-based tool and are described in this article. A regression analysis with data from 44 log file analyses shows that the size of the log file and the domain name lookup are the two main factors determining the duration of an analysis. It is less than a minute for a standard experimental study with a 2 x 2 design, a dozen Web pages, and 48 participants (ca. 800 lines, including data from drop-outs). The current version of Scientific LogAnalyzer is freely available for small log files. Its Web address is http://genpsylab-logcrunsh.unizh.ch/.
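
As an illustration of the kind of processing such a tool automates, the following is a minimal Python sketch (not Scientific LogAnalyzer itself) that computes response times and detects session breaks. The tab-delimited field layout, the 30-minute session cutoff, and all sample data are assumptions made for the example.

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)   # assumed cutoff for detecting a new session

def parse_line(line):
    # Hypothetical format: ISO timestamp <TAB> client id <TAB> requested page
    ts, client, page = line.rstrip("\n").split("\t")
    return datetime.fromisoformat(ts), client, page

def response_times(lines):
    """Yield (client, page, seconds since that client's previous request, session index)."""
    last_seen = {}                                  # client -> (timestamp, session index)
    for ts, client, page in map(parse_line, lines):
        prev = last_seen.get(client)
        if prev is None or ts - prev[0] > SESSION_GAP:
            session = prev[1] + 1 if prev else 0    # long gap: count a new session
            rt = None                               # no response time for a session's first hit
        else:
            session = prev[1]
            rt = (ts - prev[0]).total_seconds()
        last_seen[client] = (ts, session)
        yield client, page, rt, session

demo = [
    "2024-01-01T10:00:00\tA\tinstructions.html",
    "2024-01-01T10:00:42\tA\ttrial1.html",
    "2024-01-01T11:00:00\tA\ttrial2.html",          # >30 min gap -> second session
]
for row in response_times(demo):
    print(row)
```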

2.
Computer-based studies usually produce log files as raw data. These data cannot be analyzed adequately with conventional statistical software. The Chemnitz LogAnalyzer provides tools for quick and comfortable visualization and analyses of hypertext navigation behavior by individual users and for aggregated data. In addition, it supports analogous analyses of questionnaire data and reanalysis with respect to several predefined orders of nodes of the same hypertext. As an illustration of how to use the Chemnitz LogAnalyzer, we give an account of one study on learning with hypertext. Participants either searched for specific details or read a hypertext document to familiarize themselves with its content. The tool helped identify navigation strategies affected by these two processing goals and provided comparisons, for example, of processing times and visited sites. Altogether, the Chemnitz LogAnalyzer fills the gap between log files as raw data of Web-based studies and conventional statistical software.

3.
The Masked Priming Toolbox is an open-source collection of MATLAB functions that utilizes the free third-party PsychToolbox-3 (PTB3: Brainard, Spatial Vision, 10, 433-436, 1997; Kleiner, Brainard & Pelli, Perception, 36, 2007; Pelli, Spatial Vision, 10, 437-442, 1997). It is designed to allow a researcher to run masked (and unmasked) priming experiments using a variety of response devices (including keyboards, graphics tablets and force transducers). Very little knowledge of MATLAB is required; experiments are generated by creating a text file with the required parameters, and raw and analyzed data are output to Excel (as well as MATLAB) files for further analysis. The toolbox implements a variety of stimuli for use as primes and targets, as well as a variety of masks. Timing, size, location, and orientation of stimuli are all parameterizable. The code is open-source and made available on the Web under a Creative Commons License.
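
The toolbox itself is MATLAB code driven by its own parameter-file format, which is not reproduced here; purely as an illustration of the text-file-driven approach, the Python sketch below reads hypothetical "key = value" experiment parameters (all keys and values are invented).

```python
from pathlib import Path

def load_params(path):
    """Parse 'key = value' lines, ignoring blank lines and '#' comments."""
    params = {}
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()       # drop trailing comments
        if not line or "=" not in line:
            continue
        key, value = (part.strip() for part in line.split("=", 1))
        try:
            params[key] = float(value) if "." in value else int(value)
        except ValueError:
            params[key] = value                    # keep strings (e.g., device names)
    return params

Path("demo_experiment.txt").write_text(
    "prime_duration = 0.033   # seconds\nmask_type = noise\nresponse_device = keyboard\n"
)
print(load_params("demo_experiment.txt"))
```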

4.
5.
A computerized perceptual laboratory operates a variety of stimulus devices, including static and dynamic graphics displays, and records both discrete and analog data in a diverse and complex set of experimental paradigms. The software requirement to accommodate this variety of paradigms and high input and output data rates has been met by a single multitasked acquisition program that interprets unitary event commands to display graphics or nongraphics stimuli, record responses, control timing, or provide appropriate feedback. Event lists are created and modified with a text editor, then assembled into binary experiment definition files by dialogue with a parsing program. Graphics stimuli are referenced by a file name that contains stimulus attributes for later data extraction. All protocols and stimulus files are loaded prior to the block of trials, so no disk accesses that would delay events are required during a block. The resulting data file contains a record of all variable stimulus and timing information, as well as the discrete and analog responses, in a uniform format which facilitates data extraction.

6.
A data set is described that includes eight variables gathered for 13 common superordinate natural language categories and a representative set of 338 exemplars in Dutch. The category set contains 6 animal categories (reptiles, amphibians, mammals, birds, fish, and insects), 3 artifact categories (musical instruments, tools, and vehicles), 2 borderline artifact-natural-kind categories (vegetables and fruit), and 2 activity categories (sports and professions). In an exemplar and a feature generation task for the category nouns, frequency data were collected. For each of the 13 categories, a representative sample of 5–30 exemplars was selected. For all exemplars, feature generation frequencies, typicality ratings, pairwise similarity ratings, age-of-acquisition ratings, word frequencies, and word associations were gathered. Reliability estimates and some additional measures are presented. The full set of these norms is available in Excel format at the Psychonomic Society Web archive, www.psychonomic.org/archive/.

7.
A new computer software tool for coding and analyzing verbal report data is described. Combining and extending the capabilities of earlier verbal report coding software tools, CAPAS 2.0 enables researchers to code two different types of verbal report data: (1) verbal reports already transcribed and stored in text files and (2) verbal reports in their original digitally recorded audio format. For both types of data, individual verbal report segments are presented in random order and coded independently of other segments in accordance with a localized encoding principle. Once all reports are coded, CAPAS 2.0 converts the coded reports to a formatted file suitable for analysis by statistical packages such as SPSS. R. J. Crutcher, crutcher@udayton.edu
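
A small Python sketch of that general workflow, purely illustrative (the segments, codes, and output layout are invented, and CAPAS itself works differently in its details): segments are presented in a random order, coded one at a time, and written out as a .csv file that a statistics package can read.

```python
import csv
import random

segments = [
    (1, "I added the two digits first"),
    (2, "then I carried the one"),
    (3, "I just guessed on that item"),
]
codes = {"c": "calculation", "r": "retrieval", "g": "guess"}

order = random.sample(range(len(segments)), k=len(segments))   # independent, random order
coded = []
for idx in order:
    seg_id, text = segments[idx]
    key = input(f"[{seg_id}] {text!r}  code ({'/'.join(codes)}): ").strip().lower()
    coded.append({"segment": seg_id, "code": codes.get(key, "uncoded")})

with open("coded_reports.csv", "w", newline="") as f:           # statistics-package-readable output
    writer = csv.DictWriter(f, fieldnames=["segment", "code"])
    writer.writeheader()
    writer.writerows(sorted(coded, key=lambda row: row["segment"]))
```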

8.
A new OS/8 SKED     
A new SKED run-time system and compiler have been designed for use under the OS8 operating system. OS8 is a set of programs designed by DEC for the PDP8 computer with 8K or more core memory locations and a mass-storage device (disk or DECtape). The advantages of OS8 include operator convenience, device-independent input-output, standard file formats, and convenient program chaining, as well as a set of standard data analysis programs. The new compiler, OSCOMP, differs from the previous version in two ways. The first new feature is the ability to process named input and output files on any OS8-compatible peripheral. The second feature is the utilization of 8K of core, permitting compilation of longer state tables than could be processed with the earlier version. Furthermore, with a disk as the OS8 peripheral, compilation is essentially instantaneous for state tables that previously required 3–30 min with paper tape devices. The new run-time system, OSRTS8, contains a variety of new features. The most important improvements are the abilities to record data on the OS8 peripheral and to read state tables stored as files on the mass-storage device. Other new features include chaining of state tables, automatic start, automatic output file specification, and capability for as many as 12 simultaneous stations.

9.
In the present study, we investigated three factors that were assumed to have a significant influence on the success of learning from multiple hypertexts, and on the construction of a documents model in particular. These factors were task (argumentative vs. narrative), available text material (with vs. without primary sources), and presentation format (active vs. static). The study was conducted with the help of the combination of three tools (DEWEX, Chemnitz LogAnalyzer, and SummTool) developed for Web-based experimenting. The results show that the task is the most important factor for successful learning from multiple hypertexts. Depending on the task, the participants were either able or unable to apply adequate strategies, such as considering the source information. It was also observed that argumentative tasks were supported by an active hypertext presentation format, whereas performance on narrative tasks increased with a passive presentation format. No effect was shown for the type of texts available.

10.
A simple laboratory computer system based on a Digital Equipment Corporation LSI-11, floppy disk, DRV11 parallel input-output board, and the RT-11 operating system is described. Interface to experimental devices is provided through a lab-built relay driver and relay closure sensing interface. An extensive high-level software package provides an easy-to-use control language (e.g., stimuli can be controlled with a simple “TURN ON” or “TURN OFF” instruction) and easy-to-use FORTRAN subroutines for data exploration (e.g., “IFIND” searches a data file for a particular event). The control software automatically generates, codes, and stores a complete log of every input and output event and its time of occurrence in each of five simultaneously running experiments. This provides the capability to reanalyze data in light of hypotheses not available when the experiment was designed. The FORTRAN subroutine library for data exploration provides a conditional and iterative search facility to sift out events or sets of events from the data file for analysis. Standard FORTRAN statements perform arithmetic operations on the resulting data.
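
The control-language details are specific to the FORTRAN/RT-11 environment described above; as a rough modern analogue of the “IFIND” idea only, the following Python sketch scans a uniform event log (with an invented record format) for events matching a predicate, starting from a given position.

```python
events = [                               # (time in ms, channel, state) -- invented records
    (0, "houselight", "ON"), (120, "lever", "PRESS"),
    (1500, "feeder", "ON"), (1750, "feeder", "OFF"), (2100, "lever", "PRESS"),
]

def ifind(log, predicate, start=0):
    """Return the index of the first event at or after `start` that matches, or -1."""
    for i in range(start, len(log)):
        if predicate(log[i]):
            return i
    return -1

# Iterative search: collect every lever press in the log.
presses = []
i = ifind(events, lambda e: e[1] == "lever")
while i != -1:
    presses.append(events[i])
    i = ifind(events, lambda e: e[1] == "lever", start=i + 1)
print(presses)
```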

11.
We present a method for estimating parameters of connectionist models that allows the model’s output to fit as closely as possible to empirical data. The method minimizes a cost function that measures the difference between statistics computed from the model’s output and statistics computed from the subjects’ performance. An optimization algorithm finds the values of the parameters that minimize the value of this cost function. The cost function also indicates whether the model’s statistics are significantly different from the data’s. In some cases, the method can find the optimal parameters automatically. In others, the method may facilitate the manual search for optimal parameters. The method has been implemented in Matlab, is fully documented, and is available for free download from the Psychonomic Society Web archive at www.psychonomic.org/archive/.
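
The published method is implemented in Matlab; the snippet below is only a schematic Python re-creation of the general idea of minimizing a cost function that standardizes the differences between model statistics and subject statistics. The toy "model" and the target values are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

subject_stats = np.array([0.62, 540.0])   # e.g., accuracy and mean RT (invented values)
subject_sems = np.array([0.03, 12.0])     # standard errors used to scale each deviation

def model_stats(params):
    """Toy stand-in for running the model and summarizing its simulated output."""
    drift, offset = params
    accuracy = 1.0 / (1.0 + np.exp(-drift))          # made-up mapping from parameters
    mean_rt = offset + 200.0 / max(float(drift), 1e-6)
    return np.array([accuracy, mean_rt])

def cost(params):
    # Sum of squared standardized deviations; a large value flags a significant misfit.
    z = (model_stats(params) - subject_stats) / subject_sems
    return float(np.sum(z ** 2))

result = minimize(cost, x0=[1.0, 400.0], method="Nelder-Mead")
print("best parameters:", result.x, "cost:", cost(result.x))
```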

12.
We have developed a new software application, Eye-gaze Language Integration Analysis (ELIA), which allows for the rapid integration of gaze data with spoken language input (either live or prerecorded). Specifically, ELIA integrates E-Prime output and/or .csv files that include eye-gaze and real-time language information. The process of combining eye movements with real-time speech often involves multiple error-prone steps (e.g., cleaning, transposing, graphing) before a simple time course analysis plot can be viewed or before data can be imported into a statistical package. Some of the advantages of this freely available software include (1) reducing the amount of time spent preparing raw eye-tracking data for analysis; (2) allowing for the quick analysis of pilot data in order to identify issues with experimental design; (3) facilitating the separation of trial types, which allows for the examination of supplementary effects (e.g., order or gender effects); and (4) producing standard output files (i.e., .csv files) that can be read by numerous spreadsheet packages and transferred to any statistical software.
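
For readers unfamiliar with this kind of integration, here is a rough Python/pandas sketch of the underlying merge (it is not ELIA's code, and the gaze samples, words, and column names are all invented): each gaze sample is tagged with the word being spoken at that moment and then summarized as a simple time-course table.

```python
import pandas as pd

gaze = pd.DataFrame({            # hypothetical eye-tracker export
    "time_ms": [0, 50, 100, 150, 200, 250],
    "aoi": ["target", "target", "distractor", "distractor", "target", "target"],
})
speech = pd.DataFrame({          # hypothetical word-level transcript
    "word": ["click", "the", "apple"],
    "onset_ms": [0, 120, 180],
    "offset_ms": [119, 179, 400],
})

# Tag every gaze sample with the most recently started word, then drop samples
# that fall after that word's offset.
merged = pd.merge_asof(
    gaze.sort_values("time_ms"),
    speech.sort_values("onset_ms").rename(columns={"onset_ms": "time_ms"}),
    on="time_ms", direction="backward",
)
merged = merged[merged["time_ms"] <= merged["offset_ms"]]

# Proportion of looks to each area of interest per word: a simple time-course summary.
print(merged.groupby(["word", "aoi"]).size().unstack(fill_value=0))
```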

13.
On the basis of calculations using the latest lexical database produced by Amano and Kondo (2000), the fourth edition of a Web-accessible database of characteristics of the 1,945 basic Japanese kanji was produced by including the mathematical concepts of entropy, redundancy, and symmetry and by replacing selected indexes found in previous editions (Tamaoka, Kirsner, Yanase, Miyaoka, & Kawakami, 2002). The kanji database in the fourth edition introduces seven new figures for kanji characteristics: (1) printed frequency, (2) lexical productivity, (3) accumulative lexical productivity, (4) symmetry for lexical productivity, (5) entropy, (6) redundancy, and (7) numbers of meanings for On-readings and Kun-readings. The file of the fourth edition of the kanji database may be downloaded from the Psychonomic Society Web archive, http://www.psychonomics.org/archive/.
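
Entropy and redundancy here are the standard information-theoretic quantities; as a worked illustration (with invented reading frequencies, not values taken from the database), they can be computed as follows.

```python
import math

def entropy(counts):
    """Shannon entropy H = -sum(p * log2 p) over reading (or word) frequencies."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def redundancy(counts):
    """1 - H / Hmax, where Hmax = log2(number of alternatives)."""
    n = len(counts)
    if n <= 1:
        return 1.0
    return 1.0 - entropy(counts) / math.log2(n)

reading_counts = [120, 30, 5]            # invented frequencies of three readings of one kanji
print(round(entropy(reading_counts), 3), round(redundancy(reading_counts), 3))
```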

14.
Several methods are available for analyzing different aspects of behavioral transition matrices, but a comprehensive framework for their use is lacking. We analyzed parasitoid foraging behavior in environments with different plant species compositions. The resulting complex data sets were analyzed using the following stepwise procedure. We detected abrupt changes in the event log files of parasitoids, using a maximum likelihood method. This served as a criterion for splitting the event log files into two parts. For both parts, Mantel’s test was used to detect differences between first-order transition matrices, whereas an iterative proportional fitting method was used to find behavioral flows that deviated from random transitions. In addition, hidden repetitive sequences were detected in the transition matrices on the basis of their relative timing, using Theme. We discuss the results for the example from a biological context and the comprehensive use of the different methods. We stress the importance of such a combined stepwise analysis for detecting differences in some parts of event log files.
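
The full stepwise procedure (change-point detection, Mantel's test, iterative proportional fitting, Theme) is beyond a short example, but the first-order transition matrix at its core is easy to illustrate; the following Python sketch builds one from an invented event sequence.

```python
from collections import Counter

def transition_matrix(events):
    """Return the state list and row-normalized transition probabilities P(next | current)."""
    states = sorted(set(events))
    counts = Counter(zip(events, events[1:]))            # first-order transitions
    matrix = {}
    for s in states:
        row_total = sum(counts[(s, t)] for t in states)
        matrix[s] = {t: (counts[(s, t)] / row_total if row_total else 0.0) for t in states}
    return states, matrix

# Invented foraging-style event sequence, purely for demonstration.
log = ["walk", "antennate", "walk", "probe", "walk", "antennate", "probe", "probe"]
states, m = transition_matrix(log)
for s in states:
    print(s, {t: round(p, 2) for t, p in m[s].items()})
```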

15.
The dissemination of Web applications is extensive and still growing. The great penetration of Web sites raises a number of challenges for usability evaluators. Video-based analysis can be rather expensive and may provide limited results. In this article, we discuss what information can be provided by automatic tools able to process the information contained in browser logs and task models. To this end, we present a tool that can be used to compare log files of user behavior with the task model representing the actual Web site design, in order to identify where users’ interactions deviate from those envisioned by the system design.
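
As a much simplified sketch of the comparison idea (not the tool itself), an observed page sequence from a log can be aligned against the sequence a task model expects, reporting where they diverge; the page names below are hypothetical.

```python
from difflib import SequenceMatcher

expected = ["home", "search", "results", "product", "checkout"]    # task-model sequence
observed = ["home", "search", "results", "help", "results", "product", "checkout"]  # logged clicks

matcher = SequenceMatcher(a=expected, b=observed)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":                                  # report only the deviations
        print(f"{tag}: model expects {expected[i1:i2]}, log shows {observed[j1:j2]}")
```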

16.
We have developed CowLog, which is open-source software for recording behaviors from digital video and is easy to use and modify. CowLog tracks the time code from digital video files. The program is suitable for coding any digital video, but the authors have used it in animal research. The program has two main windows: a coding window, which is a graphical user interface used for choosing video files and defining output files that also has buttons for scoring behaviors, and a video window, which displays the video used for coding. The windows can be used in separate displays. The user types the key codes for the predefined behavioral categories, and CowLog transcribes their timing from the video time code to a data file. CowLog comes with an additional feature, an R package called Animal, for elementary analyses of the data files. With the analysis package, the user can calculate the frequencies, bout durations, and total durations of the coded behaviors and produce summary plots from the data.
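
A minimal Python sketch of the kind of summary the accompanying R package produces (frequencies, bout durations, total durations), assuming mutually exclusive codes in which each record marks a behavior's onset and "END" closes the observation; the coding data are invented.

```python
from collections import defaultdict

coded = [            # (time in seconds, behavior) -- invented coding output
    (0.0, "lying"), (42.5, "standing"), (60.0, "eating"),
    (95.0, "standing"), (120.0, "END"),
]

bouts = defaultdict(list)
for (t, behav), (t_next, _) in zip(coded, coded[1:]):
    if behav != "END":
        bouts[behav].append(t_next - t)      # bout lasts until the next code starts

for behav, durations in bouts.items():
    print(f"{behav}: n={len(durations)}, total={sum(durations):.1f} s, "
          f"mean bout={sum(durations) / len(durations):.1f} s")
```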

17.
DEWEX is a server-based environment for developing Web-based experiments. It provides many features for creating and running complex experimental designs on a local server. It is freeware and allows for both using default features, for which only text input is necessary, and easy configurations that can be set up by the experimenter. The tool also provides log files on the local server that can be interpreted and analyzed very easily. As an illustration of how DEWEX can be used, a recent study is presented that demonstrates the system’s most important features. This study investigated learning from multiple hypertext sources and shows the influences of task, source of information, and hypertext presentation format on the construction of mental representations of a hypertext about a historical event.

18.
The Observer Video-Pro is a system for collecting, managing, analyzing, and presenting observational data. It integrates The Observer software with time code and multimedia hardware components. It extends the functionality of a conventional real-time event recording program in various ways. Observational data can be collected, reviewed, and edited with synchronized display of the corresponding video images. For optimal visual feedback during coding, one can display the video image in a window on the computer screen. Video playback from either a VCR or a digital media file can be controlled by the computer, allowing software-controlled jog, shuttle, and search functions. Besides a wide range of VCRs, The Observer Video-Pro supports all major digital video file formats. The software allows the user to summarize research findings in numerical, graphical, or multimedia format. One can create a time-event plot for a quick glance at the temporal structure of the observed process, or run specific analysis procedures and generate reports with statistics. An Event Summary function is available for exploratory and qualitative analysis. Video material can be summarized in a Video Play List, which allows on-screen summary presentations or the creation of highlight compilations on tape, CD, or other media. Video images can be captured and saved as disk files, for use as illustrations in documents, slides for presentations, and so forth. In this paper we describe the design and operation of the system, illustrated with a case study from research on Repetitive Strain Injury (RSI).

19.
Navigational behavior on the Web can be analyzed with different methods. Log file data are an important source of behavioral traces of navigation. In this paper, we first discuss existing approaches to the classification and visualization of movement sequences that are important for understanding Web navigation. We then present STRATDYN, a tool that provides meaningful quantitative and qualitative measures from server-generated log files, as well as easy-to-follow visualizations of navigational paths of individual users. We demonstrate the usefulness of this new approach by reporting the results of two studies (with 44 students in education and vocational training), which show that navigational effectiveness is positively related to the ability to concentrate and selectively focus attention, as measured by the D2 Test of Attention and the FWIT, a German version of the Stroop test. Finally, we discuss implications for further research in this area and for the continuing development of the approach presented.

20.
The Observer is a general-purpose software package for event recording and data analysis in behavioral research. It allows any IBM-type personal computer to serve as an event recorder. In addition, The Observer can generate dedicated event-recording programs for several types of non-IBM-compatible portable and hand-held computers and transfer files between the PC and such computers. The user specifies options through menus. The configuration can be either used directly for event recording on the PC or passed on to a program generator that creates a program to collect data on a hand-held computer. Observational data from either type of computer can be analyzed by the program. Event-recording configurations can be tailored to many different experimental designs. Keys can be designated as events, and modifiers can be used to indicate the limits of an event. The program allows grouping of events in classes and distinction between mutually exclusive versus nonexclusive events and duration events versus frequency events. Timing of events is accurate to 0.1 sec. An on-line electronic notepad permits notes to be made during an observation session. The program also includes on-line error correction. User comments as well as independent variables can be stored together with the observational data. During data analysis, the user can select the level of analysis and the type of output file. The Observer calculates frequency of occurrence and duration for classes of events, individual events, or combinations of events. For analysis of concurrence, one can select the number of nesting levels and the order of nesting. Output can be generated in the form of sorted event sequence files, text report files, and tabular ASCII files. The results can be exported to spreadsheet and statistical programs.
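
The distinction between duration events and frequency events can be illustrated with a short Python sketch (this is not The Observer's own format; the record layout and codes are invented): point events are simply counted, while start/stop pairs accumulate durations.

```python
records = [      # (time in s, code, kind) where kind is "start", "stop", or "point"
    (0.0, "walk", "start"), (3.0, "peck", "point"), (5.5, "walk", "stop"),
    (6.0, "preen", "start"), (9.0, "peck", "point"), (12.0, "preen", "stop"),
]

totals, counts, open_bouts = {}, {}, {}
for t, code, kind in records:
    if kind == "point":                        # frequency event: count only
        counts[code] = counts.get(code, 0) + 1
    elif kind == "start":                      # duration event: remember onset
        open_bouts[code] = t
    elif kind == "stop":                       # close the bout and add its duration
        totals[code] = totals.get(code, 0.0) + (t - open_bouts.pop(code))

print("frequencies:", counts)                  # {'peck': 2}
print("total durations (s):", totals)          # {'walk': 5.5, 'preen': 6.0}
```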
