1.
Confidence in a perceptual decision is a judgment about the quality of the sensory evidence. The quality of the evidence depends not only on its strength (‘signal’) but also, critically, on its reliability (‘noise’); the separate contributions of these quantities to the formation of confidence judgments have not previously been investigated in the context of perceptual decisions. We studied subjective confidence reports in a multi-element perceptual task in which evidence strength and reliability could be manipulated independently. Our results reveal a confidence paradox: confidence is higher for stimuli of lower reliability, which are associated with lower accuracy. We show that the subjects’ overconfidence in trials with unreliable evidence is caused by a reduced sensitivity to stimulus variability. Our results bridge the investigation of misattributions of confidence in behavioral economics and the domain of simple perceptual decisions amenable to neuroscience research.
2.
Previous studies of the effects of ageing on episodic feeling-of-knowing (FOK) accuracy and its underlying processes have yielded conflicting results. Recent work suggests that using alternatives to gamma correlations might allow more accurate and informative interpretations of metamemory performance in ageing. We therefore investigated this issue in a large sample of 59 young and 61 older participants using alternative signal-detection theory (SDT) measures. These measures (receiver operating characteristic curves and the Brier score) are recommended in the literature and are able to reveal the characteristic profile of impairment in ageing. Our results suggest that the FOK accuracy deficit observed in the literature arises from differences in memory performance. This observation provides a parsimonious explanation for the previous discrepancies and further supports the value of SDT-derived measures in the study of metamemory.
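The two SDT-derived measures named above lend themselves to a compact illustration. The sketch below computes a Brier score and an ROC area from made-up FOK ratings and recognition outcomes; the data, the 0-1 rating scale, and the variable names are assumptions for illustration, not the study's materials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feeling-of-knowing (FOK) data: binary recognition outcomes
# (1 = later recognized) and FOK ratings on a 0-1 scale for unrecalled items.
outcomes = rng.integers(0, 2, size=200)
fok = np.clip(0.3 + 0.4 * outcomes + rng.normal(0, 0.25, size=200), 0, 1)

# Brier score: mean squared difference between the rating and the outcome
# (lower values indicate better-calibrated FOK judgements).
brier = np.mean((fok - outcomes) ** 2)

# Area under the ROC curve, a criterion-free index of resolution: the
# probability that a recognized item received a higher FOK rating than an
# unrecognized one (ties counted as one half).
pos, neg = fok[outcomes == 1], fok[outcomes == 0]
auc = np.mean(pos[:, None] > neg[None, :]) + 0.5 * np.mean(pos[:, None] == neg[None, :])

print(f"Brier score = {brier:.3f}, ROC area = {auc:.3f}")
```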
3.
Processing time predictions of current models of perception in the classic additive factors paradigm
Robin D. Thomas 《Journal of mathematical psychology》2006,50(5):441-455
This article explores the consequences for factorial additivity in a Sternberg [(1969). The discovery of processing stages: Extensions of Donders' method. In W.G. Koster (Ed.), Attention and performance II, Acta Psychologica, 30, 276-315] additive-factors paradigm of the assumptions adopted by models of perception that relate the representation of a stimulus to decision time. Three example models, signal detection theory with the latency-distance hypothesis, stochastic general recognition theory, and a random walk model of exemplar classification, are interrogated to determine what type of interaction they predict the factors will yield in a hypothetical factorial (choice) reaction time experiment in which the ‘empirical’ factors’ effects are manifest as parameter changes. All frameworks make the critical assumption that decision time depends on the perceptual representation of the stimulus as well as on the processing architecture. As a consequence, nonadditivity of factors thought to affect different “stages” in the classical approach emerges within the current modeling approach. The nature of this influence is revealed through analytic investigation and simulation. Earlier empirical findings of failures of selective influence that have defied adequate explanation are reinterpreted in light of the present findings.
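To make the core claim concrete, here is a minimal numerical sketch under assumed parameter values (not the article's models): two factors have perfectly additive effects on the drift rate of a simple diffusion decision process, yet their effects on mean response time interact, because decision time is a nonlinear function of the perceptual representation.

```python
import numpy as np

def mean_decision_time(v, a=1.0, s=1.0):
    """Expected decision time of a symmetric diffusion process
    (absorbing bounds at -a and +a, drift v, diffusion coefficient s**2)."""
    return (a / v) * np.tanh(a * v / s**2)

# Hypothetical 2x2 factorial design: each factor changes the stimulus
# representation, which reaches the decision stage only through the drift.
# The factor effects are exactly additive at the level of the drift rate.
drift = {("lo", "lo"): 0.4, ("lo", "hi"): 0.8,
         ("hi", "lo"): 0.8, ("hi", "hi"): 1.2}
rt = {cell: 0.300 + mean_decision_time(v) for cell, v in drift.items()}  # 300 ms residual stage

# Mean interaction contrast: exactly zero would signal additive RT effects.
mic = rt[("lo", "lo")] - rt[("lo", "hi")] - rt[("hi", "lo")] + rt[("hi", "hi")]
print(f"Mean interaction contrast = {mic * 1000:.1f} ms (nonzero: the factors interact)")
```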
4.
The stochastic model for the evolution of preferences proposed by Falmagne, Regenwetter, and Grofman [1997. Journal of Mathematical Psychology, 41, 129-143] and tested by Regenwetter, Falmagne, and Grofman [1999. Psychological Review, 106, 362-384], as well as the alternative Thurstonian model of Böckenholt (2002), gave a good statistical account of attitudinal panel data from the 1992 US presidential election. We show, however, that both models have the defect of underestimating the number of respondents who did not change their order of preference for the candidates across different polls. We present a generalization of Falmagne et al.'s model based on the idea that some individuals may become momentarily impervious to all matters related to the campaign and ‘tune out.’ This behavior could be triggered by some personal reason or by some external event related to the campaign. Like the original model, the resulting model is a random walk, but on an augmented set of states. A respondent in a ‘live’ state behaves as in the previous model, except when receiving a ‘tune-out’ token, which effectively freezes the respondent's preference state until it is reversed by a ‘tune-in’ token. We describe and successfully test the new model on the same 1992 National Election Study panel data as those used by Böckenholt (2002) and Regenwetter et al. (1999).
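A toy simulation can make the augmented-state idea concrete. The sketch below is an assumption-laden caricature, not the authors' specification: states are strict rankings of three hypothetical candidates, and 'tune-out'/'tune-in' tokens freeze and unfreeze the walk; all transition probabilities are illustrative.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
rankings = list(permutations("ABC"))   # strict preference orders over three candidates

def simulate(n_polls=10, p_tune_out=0.2, p_tune_in=0.5, p_move=0.3):
    """Caricature of the augmented random walk with tune-out/tune-in tokens."""
    state, live = rng.integers(len(rankings)), True
    reported = []
    for _ in range(n_polls):
        if live and rng.random() < p_tune_out:
            live = False                        # tune-out token: preferences are frozen
        elif not live and rng.random() < p_tune_in:
            live = True                         # tune-in token: the walk resumes
        if live and rng.random() < p_move:
            state = rng.integers(len(rankings)) # a token moves the respondent to a new ranking
        reported.append("".join(rankings[state]))
    return reported

print(simulate())   # frozen stretches mimic respondents whose reported order never changes
```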
5.
How do people derive meaning from numbers? Here, we instantiate the primary theories of numerical representation in computational models and compare simulated performance to human data. Specifically, we fit simulated data to the distributions of correct and incorrect responses, as well as to the pattern of errors made, in a traditional “relative quantity” task. The results reveal that no current theory of numerical representation can adequately account for the data without additional assumptions. However, when we introduce repeated, error-prone sampling of the stimulus (e.g., Cohen, 2009), superior fits are achieved when the underlying representation of integers reflects linear spacing with constant variance. These results provide new insights into (i) the detailed nature of mental numerical representation and (ii) general perceptual processes implemented by the human visual system.
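The winning account described above combines a linear representation with constant variance and repeated, error-prone sampling of the stimulus. The following sketch simulates that combination for a "which is larger?" trial; the parameter values and the averaging rule are assumptions for illustration, not the fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)

def judge_larger(n1, n2, sigma=1.5, n_samples=3):
    """One relative-quantity trial: is n1 judged larger than n2?

    Each integer is represented on a linear scale with constant variance
    (sigma), re-sampled n_samples times, and the samples are averaged
    before the comparison.  Parameter values are illustrative.
    """
    e1 = rng.normal(n1, sigma, n_samples).mean()
    e2 = rng.normal(n2, sigma, n_samples).mean()
    return e1 > e2

# Accuracy grows with numerical distance (the classic distance effect),
# even though the representational noise is constant across magnitudes.
for distance in (1, 3, 5):
    acc = np.mean([judge_larger(5 + distance, 5) for _ in range(5000)])
    print(f"distance {distance}: accuracy = {acc:.3f}")
```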
6.
7.
Dylan Molenaar Maria Bolsinova Jeroen K. Vermunt 《The British journal of mathematical and statistical psychology》2018,71(2):205-228
In item response theory, modelling the item response times in addition to the item responses may improve the detection of possible between- and within-subject differences in the process that produced the responses. For instance, if respondents rely on rapid guessing on some items but not on all, the joint distribution of the responses and response times will be a multivariate within-subject mixture distribution. Suitable parametric methods to detect these within-subject differences have been proposed. In these approaches, a distribution needs to be assumed for the within-class response times. In this paper, it is demonstrated that these parametric within-subject approaches may produce false positives and biased parameter estimates if the assumption concerning the response time distribution is violated. A semi-parametric approach is proposed that resorts to categorized response times. This approach is shown to produce hardly any false positives or parameter bias. In addition, the semi-parametric approach results in approximately the same power as the parametric approach.
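The semi-parametric move summarized above is to replace the raw response times by ordered categories so that no within-class distribution has to be assumed. A minimal sketch of that categorization step, with made-up mixture data and an arbitrary choice of five quantile-based categories:

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up response times for one item: a mixture of rapid guessing and
# slower solution behaviour -- the case in which a wrong parametric
# assumption about the RT distribution can mislead the analysis.
rt = np.concatenate([rng.lognormal(-1.0, 0.3, 150),   # rapid-guessing component
                     rng.lognormal(0.5, 0.4, 350)])   # solution-behaviour component

# Categorize the times by their empirical quantiles (five categories here,
# an arbitrary choice); the categorical variable can then be modelled
# jointly with the item responses without any within-class RT distribution.
edges = np.quantile(rt, [0.2, 0.4, 0.6, 0.8])
rt_category = np.digitize(rt, edges)                  # integer categories 0..4

print(np.bincount(rt_category))                       # roughly equal-frequency categories
```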
8.
I provide a more personal view of Wachtel's (1980) article. I begin by discussing the extent to which my own research program complies with his distinctive recommendations. After offering a different take on the impact of high productivity, I focus on (a) the negative effects of the quest for extramural funding and (b) the positive effects of a better balance between theoretical and empirical contributions. I then turn to some of my own theoretical and empirical studies of the place that theory has in successful science. This research suggests that theory has a beneficial effect only when it is integrative in function and closely constrained by available data. I end with a speculation regarding the value of having theories that are maximally formal, even mathematical.
9.
How do people stretch their understanding of magnitude from the experiential range to the very large quantities and ranges important in science, geopolitics, and mathematics? This paper empirically evaluates how and whether people make use of numerical categories when estimating relative magnitudes of numbers across many orders of magnitude. We hypothesize that people use scale words—thousand, million, billion—to carve the large number line into categories, stretching linear responses across items within each category. If so, discontinuities in position and response time are expected near the boundaries between categories. In contrast to previous work (Landy, Silbert, & Goldin, 2013), which suggested that only a minority of college undergraduates employed categorical boundaries, we find that discontinuities near category boundaries occur in most or all participants, but that accurate and inaccurate participants respond in opposite ways to category boundaries. Accurate participants highlight contrasts within a category, whereas inaccurate participants adjust their responses toward category centers.
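A small function can illustrate the hypothesized use of scale words. The sketch below is an assumption-driven caricature, not the paper's model: the line is cut into equal segments for thousands, millions, and billions, responses are linear within a segment, and a `bias` parameter pulls responses toward the category centre, mimicking the inaccurate participants; accurate participants would instead exaggerate within-category contrasts.

```python
import numpy as np

def category_position(x, bias=0.0):
    """Map a number in [1e3, 1e12) to a 0-1 line via scale-word categories.

    Each scale word (thousand, million, billion) gets an equal third of the
    line; within a category the response is linear, and bias > 0 shrinks
    responses toward the category centre.  Illustrative only.
    """
    bounds = [1e3, 1e6, 1e9, 1e12]
    for i in range(3):
        if bounds[i] <= x < bounds[i + 1]:
            within = (x - bounds[i]) / (bounds[i + 1] - bounds[i])
            within = float(np.clip((1 - bias) * within + bias * 0.5, 0, 1))
            return (i + within) / 3.0
    raise ValueError("x outside the supported range")

# A discontinuity appears at the million boundary once responses are pulled
# toward category centres.
for bias in (0.0, 0.7):
    below, above = category_position(999_999, bias), category_position(1_000_001, bias)
    print(f"bias = {bias}: just below boundary -> {below:.3f}, just above -> {above:.3f}")
```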
10.
Preferences are often represented by a function, in the deterministic as well as the probabilistic case. In the present paper we develop a new numerical representation of preference structures for which the strict preference relation (P) contains no circuit but is not necessarily transitive. Moreover, we investigate the consequences of this representation for the usual preference structures. In particular, we propose new formulations for the numerical representation of the interval order structure.
11.
The paper provides conceptual clarifications of issues related to the dependence of jointly distributed systems of random entities on external factors. This includes the theory of selective influence as proposed in Dzhafarov [(2003a). Selective influence through conditional independence. Psychometrika, 68, 7-26] and generalized versions of the notions of probabilistic causality [Suppes, P., & Zanotti, M. (1981). When are probabilistic explanations possible? Synthese, 48, 191-199] and dimensionality in latent variable models [Levine, M. V. (2003). Dimension in latent variable models. Journal of Mathematical Psychology, 47, 450-466]. One of the basic observations is that any system of random entities whose joint distribution depends on a factor set can be represented by functions of two arguments: a single factor-independent source of randomness and the factor set itself. In the case of random variables (i.e., real-valued random entities endowed with Borel sigma-algebras), the single source of randomness can be chosen to be any random variable with a continuous distribution (e.g., uniformly distributed between 0 and 1).
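The "basic observation" quoted above has a direct constructive reading via the inverse-CDF (probability-integral) trick. The sketch below is a generic illustration under assumed distributions, not the paper's formal development: one factor-independent uniform variable U is pushed through a factor-dependent function, so every factor point yields a random variable with the intended distribution while all of them share the same underlying randomness.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def f(u, factor):
    """Inverse-CDF construction: one uniform source U, a factor-dependent map.

    The factor shifts the mean of a normal distribution here; the choice of
    family and effect size is purely illustrative.
    """
    mu = {"low": 0.0, "high": 1.0}[factor]
    return stats.norm.ppf(u, loc=mu, scale=1.0)

u = rng.uniform(size=100_000)          # the single, factor-independent source of randomness
x_low, x_high = f(u, "low"), f(u, "high")

# Each marginal has the intended distribution; any joint coupling across
# factor points comes entirely from sharing the same U.
print(f"means: {x_low.mean():.3f}, {x_high.mean():.3f}; "
      f"coupling r = {np.corrcoef(x_low, x_high)[0, 1]:.3f}")
```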
12.
Chris Smerecnik Marieke Quaak Constant P. van Schayck Frederik-Jan van Schooten Hein de Vries 《Psychology & health》2013,28(8):1099-1112
Genetic advances have made genetically tailored smoking cessation treatments possible. In this study, we examined whether smokers are interested in undergoing a genetic test to identify their genetic susceptibility to nicotine addiction. In addition, we aimed to identify socio-cognitive determinants of smokers’ intention to undergo genetic testing. Following protection motivation theory (PMT), we assessed the following constructs using an online survey among 587 smokers: threat appraisal (i.e. susceptibility and severity), fear, coping appraisal (i.e. response efficacy and self-efficacy), response costs and intention. In addition, knowledge, social norms and information-seeking behaviour were measured. The mean intention rating was 2.57 on a 5-point scale. Intention was significantly associated with threat appraisal and coping appraisal, as predicted by the PMT. Fear of the outcome was negatively associated with the intention to undergo genetic testing, but response costs, knowledge and social influence were not. Intention to undergo genetic testing was in turn positively related to seeking information about genetic testing and genetically tailored smoking cessation treatments. Smokers seem ambivalent or ‘on the fence’ with regard to undergoing a genetic test for smoking addiction. Socio-cognitive concepts such as susceptibility, severity, response efficacy and self-efficacy may be used to inform or educate smokers about the value of genetically tailored smoking cessation treatments.
13.
Lawrence T. DeCarlo 《Journal of mathematical psychology》2010,54(3):304-313
Basic results for conditional means and variances, as well as distributional results, are used to clarify the similarities and differences between various extensions of signal detection theory (SDT). It is shown that a previously presented motivation for the unequal variance SDT model (varying strength) actually leads to a related, yet distinct, model. The distinction has implications for other extensions of SDT, such as models with criteria that vary over trials. It is shown that a mixture extension of SDT is also consistent with unequal variances, but provides a different interpretation of the results; mixture SDT also offers a way to unify results found across several types of studies.
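The contrast between the unequal-variance and mixture readings of the same data can be sketched numerically. The parameter values below are assumptions for illustration, not the paper's: both accounts yield z-transformed ROC slopes below one, but they attribute the asymmetry to different mechanisms (a more variable signal distribution versus a mixture of two unit-variance components).

```python
import numpy as np
from scipy import stats

criteria = np.linspace(-1.5, 3.0, 9)          # decision criteria swept to trace the ROC
fa = stats.norm.sf(criteria)                  # false alarms from the N(0, 1) noise distribution

# Unequal-variance SDT: a single signal distribution with mean d and sd s > 1.
d, s = 1.5, 1.25
hits_uv = stats.norm.sf(criteria, loc=d, scale=s)

# Mixture SDT: a proportion p of signal trials is shifted by d_a, the rest
# behave like noise; both components keep unit variance.
p, d_a = 0.75, 2.0
hits_mix = p * stats.norm.sf(criteria, loc=d_a) + (1 - p) * stats.norm.sf(criteria)

# Both accounts produce zROC slopes below 1 ("unequal variance"), yet they
# tell different stories about where the extra variability comes from.
slope_uv = np.polyfit(stats.norm.ppf(fa), stats.norm.ppf(hits_uv), 1)[0]
slope_mix = np.polyfit(stats.norm.ppf(fa), stats.norm.ppf(hits_mix), 1)[0]
print(f"zROC slope: unequal-variance = {slope_uv:.2f}, mixture = {slope_mix:.2f}")
```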
14.
Michael D. Maraun Kathleen L. Slaney Stephanie M. Gabriel 《New Ideas in Psychology》2009,27(2):148-162
In his Investigations, Wittgenstein employed a quotation from Augustine to capture certain essential features of an incoherent conception of language that he believed was at the root of many of the dominant theories of meaning of his day. It is argued in the current paper that this very same Augustinian conception of language (ACL) is the foundation of some of the most influential methodological orientations of present-day psychological science and that, as a result, these orientations suffer from a range of ACL-induced incoherences. This thesis is illustrated by way of a case study drawn from the construct validation literature.
15.
To assess the effect of a manipulation on a response time distribution, psychologists often use Vincentizing or quantile averaging to construct group or “average” distributions. We provide a theorem characterizing the large-sample properties of the averaged quantiles when the individual RT distributions all belong to the same location-scale family. We then apply the theorem to estimating parameters of the quantile-averaged distributions. From the theorem, it follows that parameters of the group distribution can be estimated by generalized least squares. This method provides accurate estimates of the standard errors of parameters and can therefore be used in formal inference. The method is benchmarked in a small simulation study against both a maximum likelihood method and an ordinary least-squares method. Generalized least squares is essentially the only method based on the averaged quantiles that is both unbiased and provides accurate estimates of parameter standard errors. It is also proved that, for location-scale families, performing generalized least squares on quantile averages is formally equivalent to averaging parameter estimates from generalized least squares performed on individuals. A limitation of the method is that the individual RT distributions must be members of the same location-scale family.
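The following sketch illustrates quantile averaging for a location-scale family and a parameter fit to the averaged quantiles. The data, sample sizes, and the use of a shifted/scaled normal family are assumptions for illustration; ordinary least squares is used for brevity, whereas the theorem above concerns generalized least squares, which additionally weights by the covariance matrix of the sample quantiles to obtain correct standard errors.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical individual RT data: every participant's distribution belongs to
# the same location-scale family (shifted and scaled standard normals here).
locs = rng.uniform(0.40, 0.60, 20)            # individual location parameters (s)
scales = rng.uniform(0.08, 0.12, 20)          # individual scale parameters (s)
probs = np.arange(0.1, 1.0, 0.1)              # deciles used for Vincentizing
indiv_q = np.array([np.quantile(rng.normal(m, s, 200), probs)
                    for m, s in zip(locs, scales)])

group_q = indiv_q.mean(axis=0)                # quantile-averaged ("Vincentized") distribution

# For a location-scale family the averaged quantiles are linear in the
# standard quantiles, q_p = mu + sigma * z_p, so the group parameters fall out
# of a straight-line fit (OLS here; GLS would weight by the quantile covariances).
z = stats.norm.ppf(probs)
sigma_hat, mu_hat = np.polyfit(z, group_q, 1)
print(f"mu = {mu_hat:.3f} (average location {locs.mean():.3f}), "
      f"sigma = {sigma_hat:.3f} (average scale {scales.mean():.3f})")
```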
16.
J. J McDowell Marcia L. Caron Saule Kulubekova John P. Berg 《Journal of the experimental analysis of behavior》2008,90(3):387-403
Virtual organisms animated by a computational theory of selection by consequences responded on symmetrical and asymmetrical concurrent schedules of reinforcement. The theory instantiated Darwinian principles of selection, reproduction, and mutation such that a population of potential behaviors evolved under the selection pressure exerted by reinforcement from the environment. The virtual organisms' steady-state behavior was well described by the power-function matching equation, and the parameters of the equation behaved in ways that were consistent with findings from experiments with live organisms. Together with previous research on single-alternative schedules (McDowell, 2004; McDowell & Caron, 2007), these results indicate that the equations of matching theory are emergent properties of the evolutionary dynamics of selection by consequences.
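For reference, the power-function (generalized) matching equation mentioned above is B1/B2 = b (R1/R2)^a and is conventionally fit in log-log coordinates. The sketch below fits it to synthetic choice data standing in for a virtual organism's steady-state behavior; the parameter values and noise level are assumptions, not results from the study.

```python
import numpy as np

rng = np.random.default_rng(6)

# Generalized matching: B1/B2 = b * (R1/R2)**a, fit as a line in log-log space.
a_true, b_true = 0.85, 1.1                     # undermatching plus a small bias
r_ratio = np.array([1/9, 1/3, 1.0, 3.0, 9.0])  # programmed reinforcement ratios
b_ratio = b_true * r_ratio**a_true * np.exp(rng.normal(0, 0.05, r_ratio.size))

# Slope of the log-log regression estimates sensitivity a; the intercept gives log b.
a_hat, log_b_hat = np.polyfit(np.log(r_ratio), np.log(b_ratio), 1)
print(f"sensitivity a = {a_hat:.2f}, bias b = {np.exp(log_b_hat):.2f}")
```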
17.
In previous works in which the topological model has been applied to martensitic phase transformations, the value of the twist angle ω was determined by the habit-plane (HP) matching method, where the physical realization of the predicted interfacial defect networks may require reorientation of defect line directions by short-range diffusion, even though no long-range diffusion is needed. In the present work, a novel criterion for determining the optimum value of the twist is proposed so that the predicted interface defects are not only able to fully accommodate the coherency strains arising on the terrace plane, but are also capable of reaching the required positions at the HP without long- or short-range diffusion. A numerical analysis of an Fe–20Ni–5Mn alloy based on the newly proposed criterion is presented, and the predictions so obtained are in good agreement with the results of the phenomenological theory and with experimental measurements.
18.
《Quarterly journal of experimental psychology (2006)》2013,66(11):2088-2098
Understanding fractions and decimals is difficult because whole numbers are the most frequently and earliest experienced type of number, and learners must avoid conceptualizing fractions and decimals in terms of their whole-number components (the “whole-number bias”). We explored the understanding of fractions, decimals, two-digit integers, and money in adults and 10-year-olds using two number line tasks: marking the line to indicate a target number, and estimating the numerical value of a mark on the line. Results were very similar for decimals, integers, and money in both tasks for both groups, demonstrating that the linear representation previously shown for integers is already evident for decimals by the age of 10. Fractions seem to be “task dependent”: when asked to place a fractional value on a line, both adults and children displayed a linear representation, whereas this pattern did not occur in the reverse task.
19.
Goyon et al. [J. Goyon, A. Colin, G. Ovarlez, A. Ajdari and L. Bocquet, Nature 454 (2008) p. 84] have shown that nonlocal effects in the rheology of foams may be accounted for by a modification of the standard (Herschel–Bulkley) model. Here we consider the effects of this modification on the continuum theory of 2d shear localisation. We compute results for various examples, showing that the localisation length is increased, and explore the limiting cases of zero and infinite nonlocality length ξ. Velocity profiles are shown to take an exponential form when ξ is large. As the formulation of the nonlocal continuum model presented in this article is general, it may also be directly applicable to other complex fluids.
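One common way to write such a nonlocal modification is through a cooperativity (fluidity) equation of the form ξ² f'' = f − f_local, with the shear rate given by the fluidity times the stress. The sketch below solves a one-dimensional version numerically under assumed boundary conditions to show that the decay length of the flowing region tracks ξ; it is a generic illustration of the nonlocal idea, not the authors' 2d continuum model.

```python
import numpy as np

# 1D fluidity sketch: xi**2 * f'' = f - f_local with f_local = 0 below yield,
# f = 1 at a fluidized wall (y = 0) and f = 0 at the far wall (y = L).
L, n = 1.0, 400
y = np.linspace(0.0, L, n)
dy = y[1] - y[0]

for xi in (0.05, 0.5):
    # Finite differences: off-diagonals xi^2/dy^2, diagonal -(2*xi^2/dy^2 + 1).
    A = (np.diag(-(2 * xi**2 / dy**2 + 1) * np.ones(n))
         + np.diag((xi**2 / dy**2) * np.ones(n - 1), 1)
         + np.diag((xi**2 / dy**2) * np.ones(n - 1), -1))
    rhs = np.zeros(n)
    A[0, :], A[-1, :] = 0.0, 0.0
    A[0, 0] = A[-1, -1] = 1.0
    rhs[0], rhs[-1] = 1.0, 0.0          # boundary conditions on the fluidity
    f = np.linalg.solve(A, rhs)

    # The high-fluidity zone decays roughly as exp(-y/xi), so the localisation
    # length grows with the nonlocality length, and the velocity profile (the
    # integral of fluidity times stress) inherits the same exponential form.
    decay_at = y[np.argmax(f < np.exp(-1.0))]
    print(f"xi = {xi}: fluidity falls to 1/e at y = {decay_at:.3f} (comparable to xi)")
```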