Similar Literature
20 similar documents retrieved.
1.
2.
When participants are asked to respond in the same way to stimuli from different sources (e.g., auditory and visual), responses are often observed to be substantially faster when both stimuli are presented simultaneously (redundancy gain). Different models account for this effect, the two most important being race models and coactivation models. Redundancy gains consistent with the race model have an upper limit, however, which is given by the well-known race model inequality (Miller, 1982). A number of statistical tests have been proposed for testing the race model inequality in single participants and groups of participants. All of these tests use the race model as the null hypothesis, and rejection of the null hypothesis is considered evidence in favor of coactivation. We introduce a statistical test in which the race model prediction is the alternative hypothesis. This test controls the Type I error if a theory predicts that the race model prediction holds in a given experimental condition.
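
For reference, Miller's (1982) race model inequality states that

    F_AB(t) ≤ F_A(t) + F_B(t)   for all times t,

where F_AB is the reaction time distribution function for the redundant (double) stimulus and F_A, F_B are the distribution functions for the two single stimuli; the race model can hold only if redundant-stimulus responses never violate this bound.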

3.
In divided-attention tasks, responses are faster when two target stimuli are presented, and thus one is redundant, than when only a single target stimulus is presented. Raab (1962) suggested an account of this redundant-targets effect in terms of a race model in which the response to redundant target stimuli is initiated by the faster of two separate target detection processes. Such models make a prediction about the probability distributions of reaction times that is often called the race model inequality, and it is often of interest to test this prediction. In this article, we describe a precise algorithm that can be used to test the race model inequality and present MATLAB routines and a Pascal program that implement this algorithm.
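
The published algorithm handles percentile estimation and ties with care; the following Python sketch shows only the basic logic, with illustrative names, and is not a substitute for the MATLAB/Pascal routines the article provides.

    import numpy as np

    def ecdf(sample, t):
        # Empirical distribution function of `sample`, evaluated at times `t`.
        s = np.sort(np.asarray(sample))
        return np.searchsorted(s, t, side="right") / len(s)

    def race_violations(rt_a, rt_b, rt_ab, n_points=100):
        # Times at which the race model inequality F_AB(t) <= F_A(t) + F_B(t)
        # is violated by the empirical distribution functions.
        pooled = np.concatenate([rt_a, rt_b, rt_ab])
        t = np.linspace(pooled.min(), pooled.max(), n_points)
        bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_b, t), 1.0)
        return t[ecdf(rt_ab, t) > bound]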

4.
Null-hypothesis significance testing remains the standard inferential tool in cognitive science despite its serious disadvantages. Primary among these is the fact that the resulting probability value does not tell the researcher what he or she usually wants to know: How probable is a hypothesis, given the obtained data? Inspired by developments presented by Wagenmakers (Psychonomic Bulletin & Review, 14, 779–804, 2007), I provide a tutorial on a Bayesian model selection approach that requires only a simple transformation of sum-of-squares values generated by the standard analysis of variance. This approach generates a graded level of evidence regarding which model (e.g., effect absent [null hypothesis] vs. effect present [alternative hypothesis]) is more strongly supported by the data. This method also obviates admonitions never to speak of accepting the null hypothesis. An Excel worksheet for computing the Bayesian analysis is provided as supplemental material.
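
The transformation rests on the BIC approximation to the Bayes factor (Wagenmakers, 2007). A minimal Python sketch of the computation, assuming the standard form of that approximation and equal prior odds (function and argument names are illustrative):

    import math

    def bic_posteriors(sse_null, sse_effect, n, k_extra):
        # sse_null: error sum of squares when the effect is excluded
        # sse_effect: error sum of squares when the effect is included
        # n: number of observations; k_extra: extra parameters in the effect model
        delta_bic = n * math.log(sse_effect / sse_null) + k_extra * math.log(n)
        bf01 = math.exp(delta_bic / 2.0)   # Bayes factor favoring the null
        p_null = bf01 / (1.0 + bf01)       # posterior probability of the null
        return p_null, 1.0 - p_null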

5.
6.
An inequality by J. O. Miller (1982) has become the standard tool to test the race model for redundant signals reaction times (RTs), as an alternative to a neural summation mechanism. It stipulates that the RT distribution function to redundant stimuli is never larger than the sum of the distribution functions for 2 single stimuli. When many different experimental conditions are to be compared, a numerical index of violation is very desirable. Widespread practice is to take a certain area with contours defined by the distribution functions for single and redundant stimuli. Here this area is shown to equal the difference between 2 mean RT values. This result provides an intuitive interpretation of the index and makes it amenable to simple statistical testing. An extension of this approach to 3 redundant signals is presented.
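
The identity behind this result is the standard fact that, for nonnegative RTs with distribution functions F and G satisfying G(t) ≥ F(t) everywhere,

    ∫₀^∞ [G(t) − F(t)] dt = E_F[T] − E_G[T],

so an area bounded by two distribution functions can always be re-expressed as a difference of means; the paper applies this with the redundant-signals distribution and the race-model bound as the contours (the exact pairing of means follows its construction).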

7.
In speeded response tasks with redundant signals, parallel processing of the redundant signals is generally tested using the so-called race inequality. The race inequality states that the distribution of fast responses for a redundant stimulus never exceeds the summed distributions of fast responses for the single stimuli. It has been pointed out that fast guesses (e.g. anticipatory responses) interfere with this test, and a correction procedure (‘kill-the-twin’ procedure) has been suggested. In this note we formally derive this procedure and extend it to the case in which redundant stimuli are presented with onset asynchrony. We demonstrate how the kill-the-twin procedure is used in a statistical test of the race model prediction.
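
Informally, the kill-the-twin correction treats every response on a catch trial as a fast guess and removes one matching ‘twin’ response from the signal condition. The Python sketch below implements only this informal reading; the note's formal derivation, including the onset-asynchrony extension, is more careful.

    import numpy as np

    def kill_the_twin(signal_rts, catch_rts):
        # For each catch-trial response (assumed to be a fast guess), remove
        # the signal-condition RT that matches it most closely.
        remaining = list(np.sort(signal_rts))
        for guess in np.sort(catch_rts):
            if not remaining:
                break
            twin = min(range(len(remaining)), key=lambda i: abs(remaining[i] - guess))
            remaining.pop(twin)
        return np.array(remaining)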

8.
9.
10.
This tutorial explains the foundation of approximate Bayesian computation (ABC), an approach to Bayesian inference that does not require the specification of a likelihood function, and hence that can be used to estimate posterior distributions of parameters for simulation-based models. We discuss briefly the philosophy of Bayesian inference and then present several algorithms for ABC. We then apply these algorithms in a number of examples. For most of these examples, the posterior distributions are known, and so we can compare the estimated posteriors derived from ABC to the true posteriors and verify that the algorithms recover the true posteriors accurately. We also consider a popular simulation-based model of recognition memory (REM) for which the true posteriors are unknown. We conclude with a number of recommendations for applying ABC methods to solve real-world problems.
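
The simplest ABC algorithm is rejection sampling: draw parameters from the prior, simulate data, and keep the draws whose simulated data lie close to what was observed. A minimal Python sketch (the summary statistic, distance, and tolerance are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)

    def abc_rejection(observed, simulate, sample_prior, n_draws=10_000, tol=0.1):
        # Keep prior draws whose simulated summary (here: the mean) falls
        # within `tol` of the observed summary.
        obs = np.mean(observed)
        kept = [theta for theta in (sample_prior() for _ in range(n_draws))
                if abs(np.mean(simulate(theta, len(observed))) - obs) < tol]
        return np.array(kept)

    # Toy use: infer the mean of a normal distribution with known sd = 1.
    data = rng.normal(0.5, 1.0, size=100)
    post = abc_rejection(data,
                         simulate=lambda th, m: rng.normal(th, 1.0, m),
                         sample_prior=lambda: rng.normal(0.0, 2.0))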

11.
Many psychological constructs are conceived to be hierarchically structured and thus to operate at various levels of generality. Alternative confirmatory factor analytic (CFA) models can be used to study various aspects of this proposition: (a) The one-factor model focuses on the top of the hierarchy and contains only a general construct, (b) the first-order factor model focuses on the intermediate level of the hierarchy and contains only specific constructs, and both (c) the higher order factor model and (d) the nested-factor model consider the hierarchy in its entirety and contain both general and specific constructs (e.g., bifactor model). This tutorial considers these CFA models in depth, addressing their psychometric properties, interpretation of general and specific constructs, and implications for model-based score reliabilities. The authors illustrate their arguments with normative data obtained for the Wechsler Adult Intelligence Scale and conclude with recommendations on which CFA model is most appropriate for which research and diagnostic purposes.
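
In equation form (notation illustrative; g is the general construct, the s_k are specific constructs, and k(i) indexes the specific factor to which indicator i belongs), the four models decompose an observed indicator x_i as:

    one-factor:     x_i = λ_i g + ε_i
    first-order:    x_i = λ_i s_k(i) + ε_i           (the s_k may correlate)
    higher-order:   x_i = λ_i s_k(i) + ε_i,  with  s_k = γ_k g + d_k
    nested-factor:  x_i = λ_gi g + λ_si s_k(i) + ε_i  (g orthogonal to the s_k)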

12.
A key problem in statistical modeling is model selection, that is, how to choose a model at an appropriate level of complexity. This problem appears in many settings, most prominently in choosing the number of clusters in mixture models or the number of factors in factor analysis. In this tutorial, we describe Bayesian nonparametric methods, a class of methods that side-steps this issue by allowing the data to determine the complexity of the model. This tutorial is a high-level introduction to Bayesian nonparametric methods and contains several examples of their application.
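
A canonical example is the Dirichlet process mixture, whose prior over cluster assignments is the Chinese restaurant process: each item joins an existing cluster with probability proportional to that cluster's size, or starts a new one with probability proportional to a concentration parameter alpha. A minimal Python sketch:

    import numpy as np

    def crp_assignments(n, alpha, rng=None):
        # Draw cluster labels for n items from a Chinese restaurant process.
        rng = rng or np.random.default_rng()
        counts, labels = [], []
        for _ in range(n):
            probs = np.array(counts + [alpha], dtype=float)
            k = rng.choice(len(probs), p=probs / probs.sum())
            if k == len(counts):
                counts.append(1)      # a new cluster is born
            else:
                counts[k] += 1
            labels.append(k)
        return labels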

13.
Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct “smart” experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond.
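
At its core, Adaptive Design Optimization selects, before each trial, the design with the highest expected utility, commonly the expected information gain about the model parameters. A toy Python sketch for a binary-outcome experiment over a discretized parameter grid (all names are illustrative):

    import numpy as np

    def expected_information_gain(prior, likelihoods):
        # prior: p(theta) on a discrete grid, shape (n_theta,)
        # likelihoods: p(y = 1 | theta, design), shape (n_designs, n_theta)
        # Returns the expected reduction in posterior entropy per design.
        def entropy(p):
            p = p[p > 0]
            return -(p * np.log(p)).sum()
        h_prior = entropy(prior)
        gains = []
        for lik_yes in likelihoods:                # candidate designs
            gain = 0.0
            for lik in (lik_yes, 1.0 - lik_yes):   # outcomes y = 1 and y = 0
                p_y = (lik * prior).sum()          # marginal prob. of outcome
                posterior = lik * prior / p_y
                gain += p_y * (h_prior - entropy(posterior))
            gains.append(gain)
        return np.array(gains)

    # The next trial would use the design that maximizes the returned gains.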

14.
The Commodore Amiga home microcomputer, together with DeLuxePaint, a commercial software package, can generate many useful visual stimuli, including random-dot stereograms, apparent motion, texture edges, aftereffects from dimming and brightening, motion aftereffects, dynamic random noise, and drifting and counterphase gratings. Videotapes can readily be made of these displays. No programming experience is necessary.

15.
The abilities to learn and to categorize are fundamental for cognitive systems, be they animals or machines, and therefore have attracted attention from engineers and psychologists alike. Modern machine learning methods and psychological models of categorization are remarkably similar, partly because these two fields share a common history in artificial neural networks and reinforcement learning. However, machine learning is now an independent and mature field that has moved beyond psychologically or neurally inspired algorithms towards providing foundations for a theory of learning that is rooted in statistics and functional analysis. Much of this research is potentially interesting for psychological theories of learning and categorization, but it is hardly accessible to psychologists. Here, we provide a tutorial introduction to a popular class of machine learning tools, called kernel methods. These methods are closely related to perceptrons, radial-basis-function neural networks and exemplar theories of categorization. Recent theoretical advances in machine learning are closely tied to the idea that the similarity of patterns can be encapsulated in a positive definite kernel. Such a positive definite kernel can define a reproducing kernel Hilbert space which allows one to use powerful tools from functional analysis for the analysis of learning algorithms. We give basic explanations of some key concepts—the so-called kernel trick, the representer theorem and regularization—which may open up the possibility that insights from machine learning can feed back into psychology.
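
A concrete instance that ties these concepts together is kernel ridge regression with a Gaussian (RBF) kernel, which is formally close to exemplar-similarity models. A minimal Python sketch; by the representer theorem the fitted function is a weighted sum of kernel similarities to the training exemplars, and lam plays the role of the regularizer:

    import numpy as np

    def rbf_kernel(X1, X2, gamma=1.0):
        # Positive definite Gaussian kernel: pairwise similarity of patterns.
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def kernel_ridge_fit(X, y, lam=0.1, gamma=1.0):
        # Regularized least squares in the reproducing kernel Hilbert space:
        # f(x) = sum_i alpha_i k(x, x_i), the representer-theorem form.
        K = rbf_kernel(X, X, gamma)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
        return lambda X_new: rbf_kernel(X_new, X, gamma) @ alpha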

16.
A typical psychophysical experiment presents a sequence of visual stimuli to an observer and collects and stores the responses for later analysis. Although computers can speed up this process, paint programs that allow one to prepare visual stimuli without programming cannot read responses from the mouse or keyboard, whereas BASIC and other programming languages that allow one to collect and store an observer’s responses unfortunately cannot handle prepainted pictures. A new programming language called The Director provides the best of both worlds. Its BASIC-like commands can manipulate prepainted pictures, read responses made with the mouse and keyboard, and save these on disk for later analysis. A dozen sample programs are provided.

17.
This tutorial focuses on the sense of touch within the context of a fully active human observer. It is intended for graduate students and researchers outside the discipline who seek an introduction to the rapidly evolving field of human haptics. The tutorial begins with a review of peripheral sensory receptors in skin, muscles, tendons, and joints. We then describe an extensive body of research on “what” and “where” channels, the former dealing with haptic perception of objects, surfaces, and their properties, and the latter with perception of spatial layout on the skin and in external space relative to the perceiver. We conclude with a brief discussion of other significant issues in the field, including vision-touch interactions, affective touch, neural plasticity, and applications.

18.
A tutorial on partially observable Markov decision processes
The partially observable Markov decision process (POMDP) model of environments was first explored in the engineering and operations research communities 40 years ago. More recently, the model has been embraced by researchers in artificial intelligence and machine learning, leading to a flurry of solution algorithms that can identify optimal or near-optimal behavior in many environments represented as POMDPs. The purpose of this article is to introduce the POMDP model to behavioral scientists who may wish to apply the framework to the problem of understanding normative behavior in experimental settings. The article includes concrete examples using a publicly available POMDP solution package.
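
The computational heart of acting in a POMDP is the belief update: after taking action a and observing o, the agent's distribution over hidden states is revised by Bayes's rule. A minimal Python sketch (array layouts are illustrative):

    import numpy as np

    def belief_update(belief, action, observation, T, O):
        # belief: p(s), shape (S,)
        # T[a, s, s2] = p(s2 | s, a); O[a, s2, o] = p(o | s2, a)
        predicted = belief @ T[action]                   # predict: p(s2 | a)
        updated = predicted * O[action][:, observation]  # weight by obs. likelihood
        return updated / updated.sum()                   # renormalize (Bayes)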

19.
This paper describes the practical steps necessary to write logfiles for recording user actions in event-driven applications. Data logging has long been used as a reliable method to record all user actions, whether assessing new software or running a behavioral experiment. With the widespread introduction of event-driven software, the logfile must enable accurate recording of all the user’s actions, whether with the keyboard or another input device. Logging is only an effective tool when it can accurately and consistently record all actions in a format that aids the extraction of useful information from the mass of data collected. Logfiles are often presented as one of many methods that could be used, and here a technique is proposed for the construction of logfiles for the quantitative assessment of software from the user’s point of view.
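
The same principles translate directly to modern event-driven code: log every user action as one uniformly formatted, timestamped record so that later extraction is mechanical. A minimal Python sketch (the field layout is an illustrative choice):

    import csv
    import time

    class ActionLogger:
        # Append each user action to a logfile as one timestamped,
        # uniformly formatted record, so extraction stays mechanical.
        def __init__(self, path):
            self._file = open(path, "a", newline="")
            self._writer = csv.writer(self._file)

        def log(self, device, event, detail=""):
            self._writer.writerow([f"{time.time():.3f}", device, event, detail])
            self._file.flush()   # ensure the record survives a crash

    # e.g. log = ActionLogger("session.log")
    #      log.log("mouse", "click", "button=left x=204 y=87")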

20.
