Similar articles (20 results)
1.
A comparison of sequential sampling models for two-choice reaction time
The authors evaluated 4 sequential sampling models for 2-choice decisions--the Wiener diffusion, Ornstein-Uhlenbeck (OU) diffusion, accumulator, and Poisson counter models--by fitting them to the response time (RT) distributions and accuracy data from 3 experiments. Each of the models was augmented with assumptions of across-trial variability in the rate of accumulation of evidence from stimuli, the values of response criteria, and the value of base RT. Although there was substantial model mimicry, empirical conditions were identified under which the models make discriminably different predictions. The best accounts of the data were provided by the Wiener diffusion model, the OU model with small-to-moderate decay, and the accumulator model with long-tailed (exponential) distributions of criteria, although the last was unable to produce error RTs shorter than correct RTs. The relationship between these models and 3 recent, neurally inspired models was also examined.

2.
A pure extinction process model of memory retrieval latency is completely characterized by the number of items to be recalled, n, and the recall rate λ. In this paper λ is allowed to vary across the population of subjects. Results are given for arbitrary individual-differences distributions on λ. When λ is gamma distributed the results are very simple and method-of-moments estimators are easily obtained. In addition, the shape parameter of the gamma individual-differences distribution turns out to be an index of the degree of heterogeneity of the recall parameter. The pure extinction process model is also appropriate for certain collection and detection tasks.
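As a rough illustration of such a model, the sketch below simulates a pure extinction (pure death) process in which, with k items still unrecalled, the next recall arrives at rate k·λ, and λ varies across subjects as a gamma variable. The process structure and all parameter values here are assumptions for illustration, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def recall_latencies(n_items, lam, rng):
        # Pure extinction process: with k items left, the next recall occurs
        # at rate k * lam, so the wait is exponential with mean 1 / (k * lam).
        waits = [rng.exponential(1.0 / (k * lam)) for k in range(n_items, 0, -1)]
        return np.cumsum(waits)  # latencies of the 1st, 2nd, ..., nth recall

    # Individual differences: lambda ~ Gamma(shape, scale) across subjects; the
    # shape parameter indexes heterogeneity (smaller shape = more heterogeneity
    # at a fixed mean recall rate).
    shape, scale, n_items = 4.0, 0.5, 10
    lams = rng.gamma(shape, scale, size=1000)
    total_time = np.array([recall_latencies(n_items, lam, rng)[-1] for lam in lams])
    print("mean total recall time:", total_time.mean())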

3.
Gómez, C., Ruiz-Adán, A., Llosa, M., & Ruiz, G. (1992). The Psychological Record, 42(2), 273-284.

Five rats were reinforced under variable-interval schedules with different average interreinforcement intervals (30 sec, 1 min, 2 min, and 4 min). Each animal was run for only two sessions of each schedule. The interresponse times (IRTs) were recorded and analyzed. The autocorrelation functions of the IRT series and of the IRT time series (number of responses per time interval) were calculated, and an absence of periodicity in the subjects' behavior was demonstrated. The frequency distributions of IRTs showed a similar shape in all cases and could be fitted to a gamma probability density function in 60% of cases at the .01 significance level (Kolmogorov-Smirnov test). The frequency distributions of the IRT time series followed a Poisson distribution at the .05 significance level. These results suggest that, as a first approximation, responding under variable-interval schedules can be modeled as a random process whose IRTs follow a gamma distribution.
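The gamma-fitting step of this analysis is easy to reproduce in outline. The sketch below fits a gamma density to a set of interresponse times and checks the fit with a Kolmogorov-Smirnov test; the data are simulated stand-ins, not the rats' IRTs, and estimating the parameters from the same data makes the K-S test only approximate.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Stand-in IRT data; the paper's IRTs come from variable-interval responding.
    irts = rng.gamma(2.0, 0.8, size=500)

    # Fit a gamma density with location fixed at zero, then test the fit.
    shape, loc, scale = stats.gamma.fit(irts, floc=0.0)
    D, p = stats.kstest(irts, "gamma", args=(shape, loc, scale))
    print(f"shape={shape:.2f} scale={scale:.2f}  KS D={D:.3f} p={p:.3f}")

    # Response counts per time bin should look roughly Poisson (mean close to
    # variance) if responding is a random process.
    counts, _ = np.histogram(np.cumsum(irts), bins=50)
    print("count mean vs. variance:", counts.mean(), counts.var())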


4.
In a recent paper, Bedrick derived the asymptotic distribution of Lord's modified sample biserial correlation estimator and studied its efficiency for bivariate normal populations. We present a more detailed examination of the properties of Lord's estimator and several competitors, including Brogden's estimator. We show that Lord's estimator is more efficient for three nonnormal distributions than a generalization of Pearson's sample biserial estimator. In addition, Lord's estimator is reasonably efficient relative to the maximum likelihood estimator for these distributions. These conclusions are consistent with Bedrick's results for the bivariate normal distribution. We also study the small sample bias and variance of Lord's estimator, and the coverage properties of several confidence interval estimates.

5.
An algorithm described by Graybill (1969) factors a population correlation matrix, R, into upper and lower triangular matrices, T and T′, such that R = T′T. The matrix T is used to generate multivariate data sets from a multinormal distribution. When this algorithm is used to generate data for nonnormal distributions, however, the sample correlations are systematically biased downward. We describe an iterative technique that removes this bias by adjusting the initial correlation matrix, R, factored by the Graybill algorithm. The method is illustrated by simulating a multivariate study by Mihal and Barrett (1976). Large-N simulations indicate that the iterative technique works: multivariate data sets generated with this approach successfully model both the univariate distributions of the individual variables and their multivariate structure (as assessed by intercorrelation and regression analyses).
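A minimal sketch of the idea, using a Cholesky factor in place of Graybill's T and an exponential marginal purely for illustration: generate correlated normals, transform them to the nonnormal marginal, measure how much the correlation attenuates, and inflate the intermediate correlation until the achieved value matches the target.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    target_r = 0.50

    # Nonnormal marginal via the probability integral transform (illustrative).
    to_marginal = lambda z: stats.expon.ppf(stats.norm.cdf(z))

    def achieved_r(r, n=100_000):
        # Correlation remaining after normal data are pushed through the marginal.
        L = np.linalg.cholesky(np.array([[1.0, r], [r, 1.0]]))
        x = to_marginal(L @ rng.standard_normal((2, n)))
        return np.corrcoef(x)[0, 1]

    # Iteratively adjust the intermediate (pre-transformation) correlation.
    r = target_r
    for _ in range(15):
        r = float(np.clip(r + (target_r - achieved_r(r)), -0.999, 0.999))
    print("intermediate r:", round(r, 3), " achieved:", round(achieved_r(r), 3))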

6.
Observers attempted to detect a weak auditory signal presented in noise. The onset of the signal was determined by a Poisson process, and only responses occurring within 1 sec after signal onset were considered detections. Three latency distributions were measured: the time to signal onset, the reaction time distribution, and the false-alarm distribution (of responses occurring before a signal onset). A simple two-state model is proposed to account for a discrepancy between the distribution of signal waits and the distribution of false alarms. The hazard functions of the reaction time distributions are considered in detail, and a simple accumulation model is proposed to account for the results.

7.
Relations between constructs are estimated based on correlations between measures of constructs corrected for measurement error. This process assumes that the true scores on the measure are linearly related to construct scores, an assumption that may not hold. We examined the extent to which differences in distribution shape reduce the correlation between true scores on a measure and scores on the underlying construct they are intended to measure. We found, via a series of Monte Carlo simulations, that when the actual construct distribution is normal, nonnormal distributions of true scores caused this correlation to drop by an average of only .02 across 15 conditions. When both construct and true score distributions assumed different combinations of nonnormal distributions, the average correlation was reduced by .05 across 375 conditions. We conclude that theory-based scales intended to measure constructs usually correlate highly with the constructs they are constructed to measure. We show that, as a result, in most cases true score correlations only modestly underestimate correlations between different constructs. However, in cases in which the two constructs are redundant, this underestimation can lead to the false conclusion that the constructs are 'correlated but distinct constructs,' resulting in construct proliferation.

8.
9.
I describe a technique for comparing two simple accounts of a distribution of response times: a mixture model and a generalized-shift model. In the mixture model, a target distribution is assumed to be a mixture of response times from two other (reference) distributions. In the generalized-shift model, the target distribution is assumed to be a quantile average of the reference distributions. In order to distinguish these two possibilities, quantiles for the target distribution are estimated from the quantiles of the reference distributions assuming either a shift or a mixture, and the predicted quantiles are used to calculate the multinomial likelihood of the obtained data. Monte Carlo simulations reported here demonstrate that the index is relatively unbiased, is effective with moderate sample sizes and modest spreads between the reference distributions, is relatively unaffected by changes in the number of bins or by data trimming, can be used with data aggregated across subjects, and is relatively insensitive to a range of subject variations in distribution shape and in mixture or shift proportion. As an illustration, the index is applied to the interpretation of three effects from distinct paradigms: residual switch costs in the task-switching paradigm, the psychological refractory period effect, and sequential effects in the Simon task. I conclude that the multinomial likelihood index provides a useful and easily applied tool for the interpretation of effects on response time distributions.
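In outline, the index compares two predictions for the target quantiles: under the generalized-shift account they are quantile averages of the reference distributions, while under the mixture account they are quantiles of the probability mixture. The sketch below computes both sets of predicted decile cuts and scores binned target data with the full multinomial log-likelihood; the distributions, weights, and sample sizes are illustrative assumptions, not the paper's.

    import numpy as np
    from scipy.special import gammaln

    rng = np.random.default_rng(3)
    ps = np.linspace(0.1, 0.9, 9)                # decile cuts -> 10 bins of p = .1

    ref1 = rng.lognormal(6.0, 0.3, 5000)         # reference RT samples (ms)
    ref2 = rng.lognormal(6.4, 0.3, 5000)
    w = 0.5                                      # shift weight / mixture proportion

    # Shift prediction: quantile averages. Mixture prediction: pooled quantiles.
    q_shift = w * np.quantile(ref1, ps) + (1 - w) * np.quantile(ref2, ps)
    q_mix = np.quantile(np.concatenate([ref1, ref2]), ps)   # 50/50 mixture

    def multinomial_loglik(data, cuts):
        # Each model predicts probability .1 per bin; the multinomial coefficient
        # rewards the model whose cuts spread the target data most evenly.
        n, _ = np.histogram(data, bins=np.r_[-np.inf, cuts, np.inf])
        return float(gammaln(n.sum() + 1) - gammaln(n + 1).sum()
                     + n.sum() * np.log(0.1))

    # A quantile-averaged target, so the shift account should score higher.
    target = w * np.sort(ref1) + (1 - w) * np.sort(ref2)
    print("shift model:  ", multinomial_loglik(target, q_shift))
    print("mixture model:", multinomial_loglik(target, q_mix))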

10.
The relation between item difficulty distributions and the validity and reliability of tests is computed through use of normal correlation surfaces for varying numbers of items and varying degrees of item intercorrelations. Optimal or near optimal item difficulty distributions are thus identified for various possible item difficulty distributions. The results indicate that, if a test is of conventional length, is homogeneous as to content, and has a symmetrical distribution of item difficulties, correlation with a normally distributed perfect measure of the attribute common to the items does not vary appreciably with variation in the item difficulty distribution. Greater variation was evident in correlation with a second duplicate test (reliability). The general implications of these findings and their particular significance for evaluating techniques aimed at increasing reliability are considered.

11.
If the discriminal distributions of signal-detectability theory evolve in time according to a normal Markov process, they can be characterized by Brownian motion generalized with a constant bias determined by signal strength. If the process is stopped at the first occurrence of a preset criterion displacement, the resulting latency distribution provides a model for the central component of simple reaction time. Discussed are properties of the distribution which should be useful in obtaining experimental predictions from neural-counting assumptions, and in relating reaction times to basic variables of the theory of signal-detectability.
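For reference, the latency distribution described here is the standard first-passage result for drifting Brownian motion (a textbook identity, not quoted from the paper): with drift ξ > 0, diffusion variance σ², and criterion displacement a > 0, the first-passage time has the inverse Gaussian (Wald) density

    f(t) = \frac{a}{\sqrt{2\pi\sigma^2 t^3}} \exp\!\left(-\frac{(a - \xi t)^2}{2\sigma^2 t}\right), \quad t > 0,

with mean a/ξ and variance aσ²/ξ³, which is the proposed model for the central component of simple reaction time.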

12.
13.
Taylor, D. H. (1965). Psychometrika, 30(2), 157-163.
This paper presents an adaptation of the method of moments for comparing observed and theoretical distributions of reaction time. By using cumulants in place of moments, considerable simplification of the treatment of convoluted distributions is obtained, particularly if one of the components is normally distributed. Stochastic latency models are often poorly fitted by reaction time data. This may be because a simple latency distribution is convoluted with a normal or high-order gamma distribution. The comparison method described will assist investigation of this and other interpretations of reaction time distributions.
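The simplification rests on a standard property of cumulants (stated here in general form, not quoted from the paper): for independent components, cumulants add under convolution,

    \kappa_n(L + R) = \kappa_n(L) + \kappa_n(R), \quad n \ge 1.

In particular, if the residual component R is normal, then κ₁(R) = μ, κ₂(R) = σ², and κₙ(R) = 0 for n ≥ 3, so every cumulant of the observed reaction time beyond the second is inherited unchanged from the latency component L. This is what makes cumulants far more convenient than raw moments for convolved distributions.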

14.
The attention literature distinguishes two general mechanisms by which attention can benefit performance: gain (or resource) models and orienting (or switching) models. In gain models, processing efficiency is a function of a spatial distribution of capacity or resources; in orienting models, an attentional spotlight must be aligned with the stimulus location, and processing efficiency is a function of when this occurs. Although they involve different processing mechanisms, these models are difficult to distinguish empirically. We compared performance with abrupt-onset and no-onset Gabor patch stimuli in a cued detection task in which we obtained distributions of reaction time (RT) and accuracy as a function of stimulus contrast. In comparison to abrupt-onset stimuli, RTs to miscued no-onset stimuli were increased and accuracy was reduced. Modeling the data with the integrated system model of Philip L. Smith and Roger Ratcliff (2009) provided evidence for reallocation of processing resources during the course of a trial, consistent with an orienting account. Our results support a view of attention in which processing efficiency depends on a dynamic spatiotemporal distribution of resources that has both gain and orienting properties.

15.
Two experiments investigated the effect of observing responses that enabled college students to emit more efficient distributions of reinforced responses. In Experiment 1, the gains of response efficiency enabled by observing were minimized through use of identical low-effort response requirements in two alternating variable-interval schedules. These comprised a mixed schedule of reinforcement; they differed in the number of money-backed points per reinforcer. In each of three choices between two stimuli that varied in their correlation with the variable-interval schedules, the results showed that subjects preferred stimuli that were correlated with the larger average amount of reinforcement. This is consistent with a conditioned-reinforcement hypothesis. Negative informative stimuli--that is, stimuli correlated with the smaller of two rewards--did not maintain as much observing as stimuli that were uncorrelated with amount of reward. In Experiment 2, savings in effort made possible by producing S- were varied within subjects by alternately removing and reinstating the response-reinforcement contingency in a mixed variable-interval/extinction schedule of reinforcement. Preference for an uncorrelated stimulus compared to a negative informative stimulus (S-) decreased for each of six subjects, and usually reversed when observing permitted a more efficient temporal distribution of the responses required for reinforcement; in this case, the responses were pulls on a relatively high-effort plunger. When observing the S- could not improve response efficiency, subjects again chose the control stimulus. All of these results were inconsistent with the uncertainty-reduction hypothesis.

16.
Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focusing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, musicians performed more veridically than non-musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, non-musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimises reproduction errors by incorporating a central-tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between duration of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together, these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors.
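A simple Gaussian form of the kind of Bayesian model described here (an assumed form, chosen for concreteness rather than taken from the paper) yields the estimate as a precision-weighted compromise between the noisy measurement t_m of the current interval and the mean μ of the interval distribution:

    \hat{t} = w\,t_m + (1 - w)\,\mu, \qquad w = \frac{\sigma_p^2}{\sigma_p^2 + \sigma_m^2},

where σ_m² is the observer's measurement variance and σ_p² the variance of the interval distribution. Observers with poor temporal precision (large σ_m², e.g., non-musicians, or visual rather than auditory intervals) get a small w and hence a stronger regression toward the mean, which is exactly the reported pattern.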

17.
Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng, it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without a preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without a pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t with 2 degrees of freedom (t(2)), and standard normal distributions that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples, as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses.
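The conditional error rate at issue is easy to simulate. The sketch below draws exponential samples (with H0 true for the mean), keeps only those that pass a Shapiro-Wilk pretest for normality, and computes the Type I error rate of the one-sample t-test among the survivors; the particular pretest, sample size, and α are illustrative choices, not the paper's exact design.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n, alpha, reps = 20, 0.05, 20_000
    mu0 = 1.0                       # true mean of Exponential(1): H0 holds

    passed = rejected = 0
    for _ in range(reps):
        x = rng.exponential(mu0, size=n)
        if stats.shapiro(x).pvalue > alpha:        # sample survives the pretest
            passed += 1
            rejected += stats.ttest_1samp(x, mu0).pvalue < alpha

    print(f"{passed} samples passed screening; "
          f"conditional Type I error = {rejected / passed:.3f}")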

18.
When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on variations in distribution shape. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of the estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust χ2 statistic and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
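The core maximum entropy step can be illustrated in the univariate case (the paper's procedures handle multivariate ordinal data through latent response variables, which this sketch omits): find the distribution over ordinal categories that maximizes entropy subject to its first four moments matching targets, by solving the convex dual problem.

    import numpy as np
    from scipy.optimize import minimize

    # Five ordinal categories; standardize the support so powers stay well scaled.
    x = np.arange(5, dtype=float)
    x = (x - x.mean()) / x.std()
    T = np.vstack([x, x**2, x**3, x**4])    # sufficient statistics for 4 moments

    # Feasible moment targets, taken here from a skewed reference pmf.
    q = np.array([0.35, 0.30, 0.20, 0.10, 0.05])
    mu = T @ q

    # The max-entropy pmf has the form p_i proportional to exp(lam . T_i);
    # minimizing the convex dual log Z(lam) - lam . mu recovers lam.
    dual = lambda lam: np.log(np.exp(lam @ T).sum()) - lam @ mu
    lam = minimize(dual, np.zeros(4), method="BFGS").x
    p = np.exp(lam @ T)
    p /= p.sum()
    print(np.round(p, 4), "| moments matched:", np.allclose(T @ p, mu, atol=1e-3))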

19.
We propose a new method for quickly calculating the probability density function for first-passage times in simple Wiener diffusion models, extending an earlier method used by [Van Zandt, T., Colonius, H., & Proctor, R. W. (2000). A comparison of two response-time models applied to perceptual matching. Psychonomic Bulletin & Review, 7, 208-256]. The method relies on the observation that there are two distinct infinite series expansions of this probability density, one of which converges quickly for small time values, while the other converges quickly at large time values. By deriving error bounds associated with finite truncation of either expansion, we are able to determine analytically which of the two versions should be applied in any particular context. The bounds indicate that, even for extremely stringent error tolerances, no more than 8 terms are required to calculate the probability density. By making the calculation of this distribution tractable, the goal is to allow more complex extensions of Wiener diffusion models to be developed.
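A compact sketch of the two-expansion idea (the series are the standard Wiener first-passage expansions; the fixed truncation and the crude switch point below stand in for the paper's analytic error bounds):

    import numpy as np

    def wfpt_density(t, v, a, w, k_terms=8):
        # First-passage density at the lower barrier of a Wiener diffusion with
        # drift v, boundary separation a, and relative starting point w = z/a.
        tau = t / a**2                            # time in normalized units
        if tau < 0.5:                             # small-time expansion
            k = np.arange(-k_terms, k_terms + 1)
            f = ((w + 2 * k) * np.exp(-((w + 2 * k) ** 2) / (2 * tau))).sum()
            f /= np.sqrt(2 * np.pi * tau**3)
        else:                                     # large-time expansion
            k = np.arange(1, k_terms + 1)
            f = np.pi * (k * np.exp(-(k**2) * np.pi**2 * tau / 2)
                         * np.sin(k * np.pi * w)).sum()
        return np.exp(-v * a * w - v**2 * t / 2) * f / a**2

    print(wfpt_density(t=0.40, v=1.0, a=1.5, w=0.5))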

20.
We propose an improved method for calculating the cumulative first-passage time distribution in Wiener diffusion models with two absorbing barriers. This distribution function is frequently used to describe responses and error probabilities in choice reaction time tasks. The present work extends related work on the density of first-passage times [Navarro, D. J., & Fuss, I. G. (2009). Fast and accurate calculations for first-passage times in Wiener diffusion models. Journal of Mathematical Psychology, 53, 222-230]. Two representations exist for the distribution, both including infinite series. We derive upper bounds for the approximation error resulting from finite truncation of the series, and we determine the number of iterations required to limit the error below a pre-specified tolerance. For a given set of parameters, the representation can then be chosen which requires the least computational effort.
