Similar Articles
20 similar articles found (search time: 31 ms)
1.
According to a well-known theorem in psychophysics (Green & Swets, 1966), the area under the receiver operating characteristic (ROC) for the yes-no paradigm equals the proportion of correct responses of an unbiased observer in the two-interval, two-alternative, forced choice paradigm (2I2AFC). Here, we demonstrate a similar relationship between the ROC area in the two-interval same-different (AX or 2IAX) paradigm, and the proportion correct in the four-interval same-different (4IAX, also known as dual-pair comparison) paradigm. The theorem demonstrated here is general, in the sense that it does not require that the sensory observations have a specific distribution (e.g., Gaussian), or that they be statistically independent.
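The area theorem can be checked with a quick Monte Carlo sketch. A Gaussian equal-variance model is assumed here purely for simulation convenience; the theorem itself is distribution-free:

```python
import numpy as np

rng = np.random.default_rng(0)
d_prime, n = 1.0, 200_000

# Yes-no observations: noise ~ N(0, 1), signal ~ N(d', 1) (illustrative choice).
noise = rng.normal(0.0, 1.0, n)
signal = rng.normal(d_prime, 1.0, n)

# ROC area for the yes-no task, estimated nonparametrically as
# P(signal observation > noise observation) -- the Mann-Whitney statistic.
roc_area = np.mean(signal > noise)

# 2I2AFC: an unbiased observer picks the interval with the larger observation.
pc_2afc = np.mean(rng.normal(d_prime, 1.0, n) > rng.normal(0.0, 1.0, n))
```

The two estimates converge on the same quantity (for d' = 1, roughly Φ(1/√2) ≈ 0.76), which is exactly the content of the theorem: the nonparametric ROC area is the success probability of the unbiased two-interval observer.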

3.
In this paper we derive the optimum (likelihood-ratio) decision statistic for a same-different paradigm. The likelihood ratio is dependent on the degree of correlation between the two observations on each trial. For the two extreme cases in which the observations are either independent or highly correlated, the optimum decision rule is identical to each of two previously suggested decision rules. For these two cases, the receiver-operating characteristic (ROC) curves are calculated. Finally, an experimental procedure is suggested for assessing the decision rule actually used by the observer in a same-different task.
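For the highly correlated case, the optimum rule reduces to the familiar differencing rule: respond "different" when |x2 − x1| exceeds a criterion. A minimal sketch of its ROC points under an assumed equal-variance Gaussian model (standard textbook formulas, not the paper's derivation):

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def same_different_roc_point(d_prime, c):
    """Hit and false-alarm rates of the differencing rule |x2 - x1| > c.

    The difference of two equal-variance Gaussian observations has sd sqrt(2),
    mean 0 on 'same' trials and +/- d_prime on 'different' trials.
    """
    fa = 2.0 * Phi(-c / sqrt(2.0))
    hit = Phi((d_prime - c) / sqrt(2.0)) + Phi(-(d_prime + c) / sqrt(2.0))
    return hit, fa

hit, fa = same_different_roc_point(1.0, 1.0)
```

Sweeping the criterion c from 0 upward traces the full ROC; at c = 0 both rates equal 1, and the hit rate stays above the false-alarm rate whenever d' > 0.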

4.
This paper presents the optimum decision rule for an m-interval oddity task in which m-1 intervals contain the same signal and one is different or odd. The optimum decision rule depends on the degree of correlation among observations. The present approach unifies the different strategies that occur with “roved” or “fixed” experiments (Macmillan & Creelman, 1991, p. 147). It is shown that the commonly used decision rule for an m-interval oddity task corresponds to the special case of highly correlated observations. However, as is also true for the same-different paradigm, there exists a different optimum decision rule when the observations are independent. The relation between the probability of a correct response and d′ is derived for the three-interval oddity task. Tables are presented of this relation for the three-, four-, and five-interval oddity task. Finally, an experimental method is proposed that allows one to determine the decision rule used by the observer in an oddity experiment.
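The commonly used rule for three-interval oddity (pick the interval most discrepant from the other two) can be simulated directly; a Monte Carlo sketch under an assumed equal-variance Gaussian model, not a reproduction of the paper's tables:

```python
import numpy as np

rng = np.random.default_rng(1)

def oddity_pc(d_prime, n_trials=100_000):
    """Monte Carlo proportion correct for 3-interval oddity using the common
    'most discrepant interval' rule (equal-variance Gaussian assumed)."""
    x = rng.normal(0.0, 1.0, (n_trials, 3))
    odd = rng.integers(0, 3, n_trials)
    x[np.arange(n_trials), odd] += d_prime       # shift the odd interval by d'
    total = x.sum(axis=1, keepdims=True)
    others_mean = (total - x) / 2.0              # mean of the other two intervals
    choice = np.argmax(np.abs(x - others_mean), axis=1)
    return float(np.mean(choice == odd))

pc_chance, pc_d2 = oddity_pc(0.0), oddity_pc(2.0)
```

At d′ = 0 the rule performs at the chance level of 1/3, and proportion correct grows with d′, which is the relation the paper tabulates.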

5.
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained.
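The dependence of EM on starting values is easy to demonstrate; a minimal sketch with a hand-rolled two-component 1-D EM and simulated data (all parameters illustrative, not the article's five techniques):

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative data: a two-component 1-D Gaussian mixture.
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

def em_gmm(x, mu_init, n_iter=200):
    """Plain EM for a two-component 1-D Gaussian mixture; returns the final
    log-likelihood and the component means."""
    mu = np.asarray(mu_init, dtype=float)
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])

    def densities():
        return w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (
            sigma * np.sqrt(2 * np.pi))

    for _ in range(n_iter):
        dens = densities()
        r = dens / dens.sum(axis=1, keepdims=True)   # E-step: responsibilities
        nk = r.sum(axis=0)                           # M-step: weights, means, sds
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return np.log(densities().sum(axis=1)).sum(), mu

# One arbitrary start vs. the best of ten random starts (means drawn from data).
ll_single, _ = em_gmm(data, [0.0, 0.1])
ll_best = max(em_gmm(data, rng.choice(data, 2, replace=False))[0]
              for _ in range(10))
```

Keeping the best of several random starts can only match or improve the attained log-likelihood, which is the rationale behind multi-start strategies in the packages the article compares.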

6.
This article examines decision processes in the perception and categorization of stimuli constructed from one or more components. First, a general perceptual theory is used to formally characterize large classes of existing decision models according to the type of decision boundary they predict in a multidimensional perceptual space. A new experimental paradigm is developed that makes it possible to accurately estimate a subject's decision boundary in a categorization task. Three experiments using this paradigm are reported. Three conclusions stand out: (a) Subjects adopted deterministic decision rules, that is, for a given location in the perceptual space, most subjects always gave the same response; (b) subjects used decision rules that were nearly optimal; and (c) the only constraint on the type of decision bound that subjects used was the amount of cognitive capacity it required to implement. Subjects were not constrained to make independent decisions on each component or to attend to the distance to each prototype.

7.
A model for the multiple dual-pair method, a generalization of the traditional dual-pair (4IAX) paradigm, is given. This model is expressed in terms of normal and beta distributions. This generalization allows for the simultaneous estimation of the perceptual distances among three or more stimuli. This model has applications in cases in which multiple two-sample comparisons would be too time consuming and labor intensive. The theory discussed shows how unequal variances can be estimated on the basis of results from that method. Two numerical examples that illustrate the ability of the beta distribution-based model to retrieve the appropriate parameters are given. It is also shown how the traditional dual-pair model is a special case of the multiple dual-pair model.

9.
In a recognition memory experiment, Mickes, Wixted, and Wais (2007) reported that distributional statistics computed from ratings made using a 20-point confidence scale (which showed that the standard deviation of the ratings made to lures was approximately 0.80 times that of the targets) essentially matched the distributional statistics estimated indirectly by fitting a Gaussian signal-detection model to the receiver-operating characteristic (ROC). We argued that the parallel results serve to increase confidence in the Gaussian unequal-variance model of recognition memory. Rouder, Pratte, and Morey (2010) argue that the results are instead uninformative. In their view, parametric models of latent memory strength are not empirically distinguishable. As such, they argue, our conclusions are arbitrary, and parametric ROC analysis should be abandoned. In an attempt to demonstrate the inherent untestability of parametric models, they describe a non-Gaussian equal-variance model that purportedly accounts for our findings just as well as the Gaussian unequal-variance model does. However, we show that their new model—despite being contrived after the fact and in full view of the to-be-explained data—does not account for the results as well as the unequal-variance Gaussian model does. This outcome manifestly demonstrates that parametric models are, in fact, testable. Moreover, the results differentially favor the Gaussian account over the probit model and over several other reasonable distributional forms (such as the Weibull and the lognormal).

10.
Curriculum-based measurement of oral reading (CBM-R) is used to monitor the effects of academic interventions for individual students. Decisions to continue, modify, or terminate these interventions are made by interpreting time series CBM-R data. Such interpretation is founded upon visual analysis or the application of decision rules. The purpose of this study was to compare the accuracy of visual analysis and decision rules. Visual analysts interpreted 108 CBM-R progress monitoring graphs one of three ways: (a) without graphic aids, (b) with a goal line, or (c) with a goal line and a trend line. Graphs differed along three dimensions, including trend magnitude, variability of observations, and duration of data collection. Automated trend line and data point decision rules were also applied to each graph. Inferential analyses permitted the estimation of the probability of a correct decision (i.e., the student is improving – continue the intervention, or the student is not improving – discontinue the intervention) for each evaluation method as a function of trend magnitude, variability of observations, and duration of data collection. All evaluation methods performed better when students made adequate progress. Visual analysis and decision rules performed similarly when observations were less variable. Results suggest that educators should collect data for more than six weeks, take steps to control measurement error, and visually analyze graphs when data are variable. Implications for practice and research are discussed.
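An automated trend-line rule of the kind applied to these graphs can be sketched minimally as follows. The slope threshold, the data, and the decision labels are illustrative assumptions, not the study's actual rule set:

```python
import numpy as np

def trend_line_decision(scores, goal_slope):
    """Fit an OLS trend line to weekly CBM-R scores and continue the
    intervention only if the fitted slope meets the goal-line slope."""
    weeks = np.arange(len(scores))
    slope = np.polyfit(weeks, scores, 1)[0]   # highest-degree coefficient first
    return "continue" if slope >= goal_slope else "modify/discontinue"

# Eight weekly words-correct-per-minute scores (hypothetical student).
decision = trend_line_decision([41, 44, 43, 48, 50, 53, 55, 57], goal_slope=1.5)
```

A flat or declining series under the same rule yields "modify/discontinue", which is the kind of automated judgment the study compares against visual analysis.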

11.
This study attempts to validate previously developed, empirically based Minnesota Multiphasic Personality Inventory (MMPI) decision rules (Keane, Malloy, & Fairbank, 1984) to aid in the diagnosis of combat-related posttraumatic stress disorder (PTSD). Four groups of 21 subjects each were identified: PTSD, psychotic, depressed, and chronic pain. A decision rule based on the standard clinical scales resulted in a correct classification rate (PTSD vs. non-PTSD) of 81% across the four-group sample. An empirically derived MMPI PTSD scale resulted in a correct classification rate of 77%. However, 43% of the PTSD subjects were incorrectly classified as non-PTSD by these rules. Independent, blind sorting of the 84 MMPI profiles by two doctoral-level clinical psychologists resulted in "hit rates" similar to the MMPI decision rules. The present results suggest that the previously derived, empirically based MMPI decision rules for PTSD do scarcely better than chance on correct classification of individuals with PTSD. We suggest that the differential diagnosis of PTSD is difficult because of the wide variety of symptoms in common with other diagnostic groups, and hence the variability of PTSD subjects on psychometric measures. We also suggest that the MMPI decision rules of Keane et al. (1984) may have utility in identifying subgroup(s) of combat-related PTSDs.

12.
Categorization and identification decision processes were examined and compared in 4 separate experiments. In all tasks, the critical stimulus component was a line that varied across trials in length and orientation, and the optimal decision rules were always complex piecewise quadratic functions. Evidence was found that identification is mediated by separate explicit and implicit systems. In addition, a common type of suboptimality was found in both categorization and identification. In particular, observers apparently approximated the piecewise quadratic functions of the optimal decision rules with simpler piecewise linear functions. A computational model, which was motivated by a recent neuropsychological theory of category learning, successfully accounted for this suboptimal performance in both categorization and identification. The model assigns a key role to the striatum and assumes the observed suboptimality was largely due to massive convergence of visual cortical cells onto single striatal units.

13.
Ren He, Huang Yingshi & Chen Ping. 《心理科学进展》 (Advances in Psychological Science), 2022, 30(5): 1168-1182
Computerized Classification Testing (CCT) classifies examinees efficiently and has been widely applied in mastery (pass/fail) testing and clinical psychology. As a core component of CCT, the termination rule determines when the test stops and into which category the examinee is finally placed, and thus directly affects both test efficiency and classification accuracy. The three existing families of termination rules (likelihood-ratio rules, Bayesian decision-theoretic rules, and confidence-interval rules) rest, respectively, on constructing a hypothesis test, designing a loss function, and comparing the relative positions of confidence intervals. Across different testing contexts, these termination rules have developed into different concrete forms. Future research could further develop Bayesian rules, address multidimensional and multi-category settings, and incorporate response times and machine-learning algorithms. With respect to practical testing needs, all three families show application potential for mastery testing, whereas clinical questionnaires tend to favor Bayesian rules.
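The likelihood-ratio family of termination rules is typified by Wald's SPRT; a minimal two-category sketch under an assumed Rasch item model (the cut points, error rates, and item pool are illustrative, not from the review):

```python
from math import exp, log

def sprt_classify(responses, b_items, theta0=-0.5, theta1=0.5,
                  alpha=0.05, beta=0.05):
    """Wald SPRT as a CCT termination rule for two categories (master vs.
    non-master). A Rasch model P(u=1) = 1/(1+exp(-(theta-b))) is assumed."""
    upper, lower = log((1 - beta) / alpha), log(beta / (1 - alpha))
    llr = 0.0
    for n, (u, b) in enumerate(zip(responses, b_items), start=1):
        p1 = 1.0 / (1.0 + exp(-(theta1 - b)))   # P(correct) at upper cut point
        p0 = 1.0 / (1.0 + exp(-(theta0 - b)))   # P(correct) at lower cut point
        llr += log(p1 / p0) if u == 1 else log((1 - p1) / (1 - p0))
        if llr >= upper:                        # enough evidence for mastery
            return "master", n
        if llr <= lower:                        # enough evidence against mastery
            return "non-master", n
    return "undecided", len(b_items)            # pool exhausted before stopping

decision, n_used = sprt_classify([1] * 10, [0.0] * 10)
```

With b = 0 items, each correct response adds exactly 0.5 to the log-likelihood ratio, so the upper bound log(19) ≈ 2.94 is crossed on the sixth item; the test stops early, which is the efficiency gain termination rules are designed for.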

14.
The standard signal detection theory (SDT) approach to m-alternative forced choice uses the proportion correct as the outcome variable and assumes that there is no response bias. The assumption of no bias is not made for theoretical reasons, but rather because it simplifies the model and estimation of its parameters. The SDT model for mAFC with bias is presented, with the cases of two, three, and four alternatives considered in detail. Two approaches to fitting the model are noted: maximum likelihood estimation with Gaussian quadrature and Bayesian estimation with Markov chain Monte Carlo. SAS and OpenBUGS programs to fit the models are provided, and an application to real-world data is presented. Both approaches are examined in simulations.
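In the no-bias case, proportion correct reduces to a one-dimensional integral; a small numerical sketch under the standard equal-variance Gaussian assumptions:

```python
import numpy as np
from math import erf, sqrt, pi

def mafc_pc(d_prime, m, lo=-10.0, hi=14.0, n=24001):
    """Proportion correct of an unbiased observer in m-AFC under the
    equal-variance Gaussian model: Pc = integral of phi(x - d') * Phi(x)^(m-1),
    evaluated here by the trapezoid rule on a fine grid."""
    x = np.linspace(lo, hi, n)
    phi = np.exp(-0.5 * (x - d_prime) ** 2) / sqrt(2.0 * pi)
    Phi = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in x])
    y = phi * Phi ** (m - 1)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)
```

For m = 2 this reduces to the familiar Pc = Φ(d′/√2), and for fixed d′ the proportion correct falls as the number of alternatives grows.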

15.
An optimal decision strategy for deciding whether two things are the same or different is to adopt a likelihood-ratio criterion. The parametric equations for the receiver operating characteristic (ROC) based on the likelihood-ratio strategy when observations are independent are complicated; they require the numerical evaluation of a double integral. An approximation to the parametric equations for the likelihood-ratio strategy was developed. This approximation takes the form of a pair of equations that describe ROCs virtually indistinguishable from those of the full model.

16.
Fault trees are used to organize potential causes of a problem to facilitate better judgments about potential problem solutions. However, fault trees can lead to biased judgments because decision makers tend to overestimate the likelihood of problem causes that are explicitly mentioned in the fault tree and underestimate the likelihood of problem causes that are not. In this research, we examined the impact of context information and need for cognitive closure on these estimates. In 2 experiments, participants with a low need for cognitive closure used the informational content of experimenter-provided and self-generated context information as a basis for making likelihood estimates. In contrast, participants with a high need for closure did not use experimenter-provided context information at all but used the ease of producing self-generated context information (rather than informational content) as a basis for their likelihood estimates.

17.
A solution is presented for an internal multidimensional unfolding problem in which all the judgments of a rectangular proximity matrix are a function of a single-ideal object. The solution is obtained by showing that when real and ideal objects are represented by normal distributions in a multidimensional Euclidean space, a vector of distances among a single-ideal and multiple real objects follows a multivariate quadratic form in normal variables distribution. An approximation to the vector's probability density function (PDF) is developed which allows maximum likelihood (ML) solutions to be estimated. Under dependent sampling, the likelihood function contains information about the parametric distances among real object pairs, permitting the estimation of single-ideal solutions and leading to more robust multiple-ideal solutions. Tests for single- vs. multiple-ideal solutions and dependent vs. independent sampling are given. Properties of the proposed model and parameter recovery are explored. Empirical illustrations are also provided.

18.
19.
The continuous strength model of recognition memory was evaluated in a task where Ss were tested for recognition of 10-number lists using a rating procedure. Maximum likelihood estimates of the parameters of the model were obtained by an iterative method on a high-speed computer, and a chi-square goodness-of-fit test was performed for individual Ss. For 15 of 20 Ss, the chi-square values were nonsignificant, p < .05, indicating that the model provided a good fit to the data. Although the model gave a good fit to the data, the Δm measure of sensitivity was highly correlated with a true recognition score computed by subtracting false alarms from correct recognitions.

20.
To evaluate a model of top-down gain control in the auditory system, 6 participants were asked to identify 1-kHz pure tones differing only in intensity. There were three 20-session conditions: (1) four soft tones (25, 30, 35, and 40 dB SPL) in the set; (2) those four soft tones plus a 50-dB SPL tone; and (3) the four soft tones plus an 80-dB SPL tone. The results were well described by a top-down, nonlinear gain-control system in which the amplifier’s gain depended on the highest intensity in the stimulus set. Individual participants’ identification judgments were generally compatible with an equal-variance signal-detection model in which the mean locations of the distribution of effects along the decision axis were determined by the operation of this nonlinear amplification system.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)