Similar Articles (20 results)
1.
Pseudocontingencies (PCs) allow for inferences about the contingency between two variables X and Y when the conditions for genuine contingency assessment are not met. Even when joint observations Xi and Yi about the same reference objects i are not available or are detached in time or space, the correlation r(Xi, Yi) is readily inferred from base rates. Inferred correlations are positive (negative) if X and Y base rates are skewed in the same (different) directions. Such PC inferences afford useful proxies for actually existing contingencies. While previous studies have focused on PCs due to environmental base rates, the present research highlights memory organization as a natural source of PC effects. When information about two attributes X and Y is represented in a hierarchically organized categorical memory code, as category-wise base rates p(X) and p(Y), the reconstruction of item-level information from category base rates will naturally produce PC effects. Three experiments support this contention. When the "yes" base rates of two respondents in four questionnaire subscales (categories) were correlated, recalled and predicted item-level responses were correlated in the same direction, even when the original responses to specific items within categories were correlated in the opposite direction.
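The reconstruction mechanism described above can be illustrated with a minimal simulation (the base rates and category labels below are hypothetical, not the experiments' materials): within each category, items are regenerated independently from category base rates, yet a positive correlation emerges when the two attributes' skews are aligned across categories.

```python
import random

random.seed(1)

# Two hypothetical categories with aligned skews: in A both X and Y are
# mostly "yes"; in B both are mostly "no".
base_rates = {"A": (0.8, 0.8), "B": (0.2, 0.2)}

# Reconstruct item-level responses purely from category base rates,
# as a hierarchical categorical memory code would; within a category,
# X and Y are sampled independently (zero within-category contingency).
xs, ys = [], []
for cat, (px, py) in base_rates.items():
    for _ in range(5000):
        xs.append(1 if random.random() < px else 0)
        ys.append(1 if random.random() < py else 0)

# Pearson correlation (phi coefficient for binary data) across all items.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
r = cov / (sx * sy)
print(round(r, 2))  # positive: a pseudocontingency from base rates alone
```

Reversing one category's skew (e.g., B at (0.8, 0.2)) would flip the sign, matching the same-direction/different-direction rule in the abstract.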

2.
The present experiment explores the effects of the response (1-sec occupancy of a target area in an open field)-reinforcer (intracranial stimulation) contingency on time allocation in the open field in rats. The probability of reinforcement given a response (X) and the probability of reinforcement given the absence of a response (Y) were varied randomly across sessions within a subject. The following (X, Y) values were utilized: (.05, 0), (.15, 0), (.25, 0), (.15, .05), and (.15, .15). The results of this experiment indicate that rate of acquisition of time allocation preference is uniformly rapid during all contingency treatments wherein Y = 0 and is negatively related to the value of Y when X = .15. The relationship between the asymptote of the time allocation acquisition function and the value of X (when Y = 0) is positively sloped and negatively accelerated, while the relationship between asymptote and the value of Y (when X = .15) is negatively sloped with zero acceleration. Proposed contingency metrics are evaluated.

3.
A class of simple problem solving tasks requiring fast, accurate solutions is introduced. In an experiment, subjects memorized a mapping rule represented by lists of words labeled by cue words and made true/false decisions about conjunctions of propositions of the form “Y is in the list labeled by X”, written “X→Y”. Response times are analyzed using a “stage modeling” technique in which problem solving algorithms are composed from a small set of psychological operations whose real-time characteristics are specified parametrically. The theoretical analysis shows that response time performance is adequately described in terms of the sequential application of elementary psychological operations. Unexpectedly, it was found that the proposition “X→Y and X→Z” was verified as quickly as the apparently simpler “X→Y”. A case is presented for the modeling technique as applied to memory and problem solving tasks in terms of theoretical parsimony, statistical simplicity, and flexibility in investigative empirical research. Suggestions are made as to possible theoretical relations among fast problem solving, more complex and slower problem solving, and research in fundamental memory processes.

4.
A mathematical model is described based on the first-order system transfer function in the form Y = B3·exp(−B2·(X−1)) + B4·(1 − exp(−B2·(X−1))), where X is the learning session number, Y is the quantity of errors, B2 is the learning rate, B3 is resistance to learning, and B4 is ability to learn. The model is tested in a light-dark discrimination learning task in a 3-arm radial maze using Wistar and albino rats. The model provided good fits of experimental data under acquisition and reacquisition, and was able to detect strain differences between Wistar and albino rats. The model was compared with the Rescorla-Wagner model, and the two were found to be mutually complementary. Comparisons with Tulving’s logarithmic function and with Valentine’s hyperbola and arc cotangent functions are also provided. Our model is valid for fitting averaged group data if averaging is applied to a subgroup of subjects possessing individual learning curves of an exponential shape.
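Fitting the stated transfer function to session-by-error data is a standard nonlinear least-squares problem. A minimal sketch with SciPy, using fabricated error counts (not the article's data) generated near B2 = 0.5, B3 = 20, B4 = 2:

```python
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(X, B2, B3, B4):
    """Y = B3*exp(-B2*(X-1)) + B4*(1 - exp(-B2*(X-1))):
    errors decay from B3 (resistance) toward B4 (ability) at rate B2."""
    return B3 * np.exp(-B2 * (X - 1)) + B4 * (1 - np.exp(-B2 * (X - 1)))

# Hypothetical error counts over 10 sessions (illustrative only).
sessions = np.arange(1, 11)
errors = np.array([20.0, 12.9, 8.6, 6.0, 4.4, 3.5, 2.9, 2.5, 2.3, 2.2])

(B2, B3, B4), _ = curve_fit(learning_curve, sessions, errors, p0=[0.5, 20.0, 2.0])
print(f"rate B2={B2:.2f}, resistance B3={B3:.1f}, ability B4={B4:.1f}")
```

Because the data here follow the model closely, the fit recovers the generating parameters; with real group data the abstract's caveat applies (averaging is only safe over subjects whose individual curves are exponential).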

5.
Binary multinomial processing tree (MPT) models parameterize the multinomial distribution over a set of J categories, such that each of its parameters, θ1, θ2, …, θS, is functionally independent and free to vary in the interval [0,1]. This paper analyzes binary MPT models subject to parametric order constraints of the form 0 ≤ θs ≤ θt ≤ 1. Such constraints arise naturally in multi-trial learning and memory paradigms, where some parameters representing cognitive processes would naturally be expected to be non-decreasing over learning trials or non-increasing over forgetting trials. The paper considers the case of one or more non-overlapping linear orders of parametric constraints. Several ways to reparameterize the model to reflect the constraints are presented, and for each it is shown how to construct a new binary MPT that has the same number of parameters and is statistically equivalent to the original model with the order constraints. The results both extend the mathematical analysis of the MPT class and offer an approach to order-restricted inference at the level of the entire class. An empirical example of this approach is provided.
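One standard way to absorb an order constraint θs ≤ θt into unconstrained [0,1] parameters is to write θs = λ·θt with λ ∈ [0,1]. This sketch illustrates that general idea only; it is not the paper's specific tree construction:

```python
# Reparameterize the order constraint 0 <= theta_s <= theta_t <= 1
# as two functionally independent parameters (lam, theta_t) in [0,1]^2.

def to_free(theta_s, theta_t):
    """Map a constrained pair (theta_s <= theta_t) to free parameters."""
    lam = theta_s / theta_t if theta_t > 0 else 0.0
    return lam, theta_t

def to_constrained(lam, theta_t):
    """Inverse map: any (lam, theta_t) in [0,1]^2 satisfies the order."""
    return lam * theta_t, theta_t

lam, t = to_free(0.3, 0.6)
s2, t2 = to_constrained(lam, t)
assert abs(s2 - 0.3) < 1e-12 and t2 == 0.6
print(lam, t)
```

The two maps are inverses on the constrained region, so the reparameterized model has the same number of parameters, echoing the statistical-equivalence result described in the abstract.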

6.
It is easy to construct pairs of sentences X, Y that lead many people to ascribe higher probability to the conjunction X-and-Y than to the conjuncts X, Y. Whether an error is thereby committed depends on reasoners’ interpretation of the expressions “probability” and “and.” We report two experiments designed to clarify the normative status of typical responses to conjunction problems.
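On the standard reading of "probability" and "and", a conjunction can never exceed either conjunct, which is why the ranking above counts as an error under that reading. A minimal numerical check (illustrative probabilities):

```python
# A joint distribution over two events X and Y (hypothetical numbers).
p_x_and_y = 0.2   # P(X and Y)
p_x_only = 0.3    # P(X and not Y)
p_y_only = 0.1    # P(Y and not X)

p_x = p_x_and_y + p_x_only  # marginal P(X) = 0.5
p_y = p_x_and_y + p_y_only  # marginal P(Y) = 0.3

# The conjunction rule: P(X and Y) <= min(P(X), P(Y)), always.
assert p_x_and_y <= min(p_x, p_y)
print(p_x, p_y, p_x_and_y)
```

The experiments' point is that participants may be interpreting "probability" or "and" non-extensionally, in which case this inequality need not govern their judgments.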

7.
The class of multinomial processing tree (MPT) models has been used extensively in cognitive psychology to model latent cognitive processes. Critical for the usefulness of an MPT model is its psychological validity. Generally, the validity of an MPT model is demonstrated by showing that its parameters are selectively and predictably affected by theoretically meaningful experimental manipulations. Another approach is to test the convergent validity of the model parameters and other extraneous measures intended to measure the same cognitive processes. Here, we advance the concept of construct validity (Cronbach & Meehl, 1955) as a criterion for model validity in MPT modelling and show how this approach can be fruitfully utilized using the example of an MPT model of event-based prospective memory. For that purpose, we investigated the convergent validity of the model parameters and established extraneous measures of prospective memory processes over a range of experimental settings, and we found a lack of convergent validity between the two indices. On a conceptual level, these results illustrate the importance of testing convergent validity. Additionally, they have implications for prospective memory research, because they demonstrate that the MPT model of event-based prospective memory is not able to differentiate between different processes contributing to prospective memory performance.

8.
Multinomial processing tree (MPT) models are a class of measurement models that account for categorical data by assuming a finite number of underlying cognitive processes. Traditionally, data are aggregated across participants and analyzed under the assumption of independently and identically distributed observations. Hierarchical Bayesian extensions of MPT models explicitly account for participant heterogeneity by assuming that the individual parameters follow a continuous hierarchical distribution. We provide an accessible introduction to hierarchical MPT modeling and present the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches for participant heterogeneity—the beta-MPT approach (Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach (Klauer, Psychometrika 75:70-98, 2010). TreeBUGS reads standard MPT model files and obtains Markov chain Monte Carlo samples that approximate the posterior distribution. The functionality and output are tailored to the specific needs of MPT modelers and provide tests for the homogeneity of items and participants, individual and group parameter estimates, fit statistics, and within- and between-subjects comparisons, as well as goodness-of-fit and summary plots. We also propose and implement novel statistical extensions to include continuous and discrete predictors (as either fixed or random effects) in the latent-trait MPT model.
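TreeBUGS itself is an R package, but the beta-MPT idea it implements can be sketched in a few lines of Python: each participant's processing parameter is drawn from a Beta hierarchical distribution rather than shared. The one-parameter tree, guess rate, and Beta hyperparameters below are all illustrative assumptions, not a TreeBUGS model file:

```python
import random

random.seed(7)

# Beta-MPT in miniature: each participant's detection parameter d is
# drawn from Beta(alpha, beta_), capturing participant heterogeneity.
# Hypothetical one-high-threshold-style tree: respond "old" with
# probability d + (1 - d) * g, where g is a fixed guessing rate.
alpha, beta_, g, n_items = 8, 2, 0.4, 200

def simulate_participant():
    d = random.betavariate(alpha, beta_)   # individual-level parameter
    p_old = d + (1 - d) * g                # tree's category probability
    hits = sum(random.random() < p_old for _ in range(n_items))
    return d, hits / n_items

results = [simulate_participant() for _ in range(50)]
mean_d = sum(d for d, _ in results) / len(results)
print(round(mean_d, 2))  # near the group mean alpha / (alpha + beta_) = 0.8
```

Fitting the hyperparameters (alpha, beta_) from such heterogeneous frequencies, via MCMC, is exactly the job the package's beta-MPT machinery automates.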

9.
Making judgments by relying on beliefs about the causal relationships between events is a fundamental capacity of everyday cognition. In the last decade, Causal Bayesian Networks have been proposed as a framework for modeling causal reasoning. Two experiments were conducted to provide comprehensive data sets with which to evaluate a variety of different types of judgments in comparison to the standard Bayesian networks calculations. Participants were introduced to a fictional system of three events and observed a set of learning trials that instantiated the multivariate distribution relating the three variables. We tested inferences on chains X1 → Y → X2, common-cause structures X1 ← Y → X2, and common-effect structures X1 → Y ← X2, on binary and numerical variables, and with high and intermediate causal strengths. We tested transitive inferences, inferences when one variable is irrelevant because it is blocked by an intervening variable (Markov Assumption), inferences from two variables to a middle variable, and inferences about the presence of one cause when the alternative cause was known to have occurred (the normative “explaining away” pattern). Compared to the normative account, in general, when the judgments should change, they change in the normative direction. However, we also discuss a few persistent violations of the standard normative model. In addition, we evaluate the relative success of 12 theoretical explanations for these deviations.
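The normative "explaining away" pattern on a common-effect structure X1 → Y ← X2 can be computed by exact enumeration. The priors and noisy-OR likelihood below are illustrative assumptions, not the experiments' parameters:

```python
from itertools import product

# Common-effect network X1 -> Y <- X2 with a noisy-OR likelihood.
p_x1, p_x2, strength, leak = 0.5, 0.5, 0.9, 0.05

def p_y_given(x1, x2):
    # Noisy-OR: each present cause independently fails with prob 1 - strength;
    # leak allows Y without either cause.
    p_not_y = (1 - leak) * (1 - strength) ** (x1 + x2)
    return 1 - p_not_y

def posterior_x1(evidence):
    """P(X1 = 1 | Y = 1, evidence) by enumerating the joint distribution."""
    num = den = 0.0
    for x1, x2 in product([0, 1], repeat=2):
        if "x2" in evidence and x2 != evidence["x2"]:
            continue
        joint = ((p_x1 if x1 else 1 - p_x1) *
                 (p_x2 if x2 else 1 - p_x2) *
                 p_y_given(x1, x2))          # conditions on Y = 1
        den += joint
        num += joint * x1
    return num / den

p_alone = posterior_x1({})           # P(X1=1 | Y=1)
p_away = posterior_x1({"x2": 1})     # P(X1=1 | Y=1, X2=1)
print(round(p_alone, 3), round(p_away, 3))
assert p_away < p_alone              # learning X2 occurred "explains away" X1
```

This is the benchmark against which participants' judgments are scored: belief in one cause should drop once the alternative cause is known to have occurred.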

10.
The present experiment explores the effects of the response (1-sec occupancy of a target area in an open field)-reinforcer (intracranial stimulation) contingency on time allocation in the open field in rats. The probability of reinforcement given a response (X) and the probability of reinforcement given nonresponse (Y) were varied randomly across sessions within a subject. The 21 contingency treatments explored included all possible combinations of values (0, .1, .2, .3, .4, .5) of X and Y such that X ≥ Y. The results indicate that rate of acquisition and asymptotic level of time allocation preference to the target area are negatively related to the value of Y (for any given value of X). Variations in X (for any given value of Y) were less effective. Evaluation of proposed contingency metrics revealed that the Weber fraction (X − Y)/X most closely approximates performance, and that the value of the difference detection threshold derived from the Weber fraction is a constant.
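The Weber-fraction metric named above is easy to tabulate. A minimal sketch over a few of the (X, Y) treatment pairs (a subset chosen for illustration; the experiment used all 21 combinations with X ≥ Y):

```python
# Weber-fraction contingency metric (X - Y) / X, where
# X = p(reinforcement | response), Y = p(reinforcement | nonresponse).
treatments = [(0.5, 0.0), (0.4, 0.1), (0.3, 0.2), (0.3, 0.0), (0.2, 0.2)]

def weber(x, y):
    """Relative advantage of responding; 0 when responding adds nothing."""
    return (x - y) / x if x > 0 else 0.0

for x, y in treatments:
    print(f"X={x}, Y={y}: (X-Y)/X = {weber(x, y):.2f}")
```

Note the metric's key property for these data: it is driven down sharply by increases in Y at fixed X, but changes more slowly with X at fixed Y, mirroring the asymmetry the experiment reports.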

11.
Inference of variance components in linear mixed modeling (LMM) provides evidence of heterogeneity between individuals or clusters. When only nonnegative variances are allowed, there is a boundary (i.e., 0) in the variances’ parameter space, and standard statistical inference procedures for such a parameter can be problematic. The goal of this article is to introduce a practically feasible permutation method to make inferences about variance components while considering the boundary issue in LMM. The permutation tests with different settings (i.e., constrained vs. unconstrained estimation, specific vs. generalized test, different ways of calculating p values, and different ways of permutation) were examined with both normal data and non-normal data. In addition, the permutation tests were compared to likelihood ratio (LR) tests with a mixture of chi-squared distributions as the reference distribution. We found that the unconstrained permutation test with the one-sided p-value approach performed better than the other permutation tests and is a useful alternative when the LR tests are not applicable. An R function is provided to facilitate the implementation of the permutation tests, and a real data example is used to illustrate the application. We hope our results will help researchers choose appropriate tests when testing variance components in LMM.
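The permutation idea can be sketched in its simplest form for a one-way layout: permuting cluster labels destroys any between-cluster variance component, giving a null reference distribution for the observed between-cluster statistic. This is a simplified illustration of the general strategy, not the article's LMM procedure, and the data are fabricated:

```python
import random

random.seed(3)

def between_var(values, labels, k):
    """Sample variance of the k cluster means (the test statistic)."""
    groups = [[] for _ in range(k)]
    for v, g in zip(values, labels):
        groups[g].append(v)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(means) / k
    return sum((m - grand) ** 2 for m in means) / (k - 1)

# Hypothetical data: 6 clusters of 10, with real cluster effects present.
k, n = 6, 10
effects = [(-1) ** i * 1.5 for i in range(k)]
values, labels = [], []
for g in range(k):
    for _ in range(n):
        values.append(effects[g] + random.gauss(0, 1))
        labels.append(g)

observed = between_var(values, labels, k)
perms, count = 999, 0
for _ in range(perms):
    shuffled = labels[:]
    random.shuffle(shuffled)        # breaks any cluster structure
    if between_var(values, shuffled, k) >= observed:
        count += 1
p_value = (count + 1) / (perms + 1)  # one-sided p value
print(p_value)
```

Because the statistic is nonnegative by construction, the one-sided p value respects the boundary at zero, which is the motivation for the permutation approach over naive Wald-type inference.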

12.
People often test hypotheses about two variables (X and Y), each with two levels (e.g., X1 and X2). When testing “If X1, then Y1,” observing the conjunction of X1 and Y1 is overwhelmingly perceived as more supportive than observing the conjunction of X2 and Y2, although both observations support the hypothesis. Normatively, the X2&Y2 observation provides stronger support than the X1&Y1 observation if the former is rarer. Because participants in laboratory settings typically test hypotheses they are unfamiliar with, previous research has not examined whether participants are sensitive to the rarity of observations. The experiment reported here showed that participants were sensitive to rarity, even judging a rare X2&Y2 observation more supportive than a common X1&Y1 observation under certain conditions. Furthermore, participants’ default strategy of judging X1&Y1 observations more informative might be generally adaptive because hypotheses usually regard rare events.
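The normative rarity claim can be made concrete with a likelihood-ratio formalization. The sketch below is one standard Bayesian way to cash it out (an assumption of this illustration, not necessarily the paper's derivation): compare "If X1 then Y1" as perfect dependence against an independence alternative, holding the marginals p = P(X1) and q = P(Y1) fixed, with q ≥ p so the dependent model is well defined:

```python
def support(p, q):
    """Likelihood ratios P(obs | dependence) / P(obs | independence)
    for the two confirming observations, given marginals p = P(X1),
    q = P(Y1) with q >= p."""
    lr_x1y1 = p / (p * q)                          # observe X1 & Y1
    p_x2y2_dep = (1 - p) * (1 - (q - p) / (1 - p))  # P(X2 & Y2 | dependence)
    lr_x2y2 = p_x2y2_dep / ((1 - p) * (1 - q))
    return lr_x1y1, lr_x2y2

# When X1 and Y1 are common (so X2 & Y2 is the rare observation),
# the X2 & Y2 conjunction carries the stronger support.
lr1, lr2 = support(0.9, 0.9)
print(round(lr1, 2), round(lr2, 2))
assert lr2 > lr1
```

With rare X1 and Y1 (e.g., support(0.1, 0.1)) the ordering reverses, which is why a default preference for X1&Y1 is adaptive if hypotheses usually concern rare events.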

13.
Blockage contraction is an operation of belief contraction that acts directly on the outcome set, i.e. the set of logically closed subsets of the original belief set K that are potential contraction outcomes. Blocking is represented by a binary relation on the outcome set. If a potential outcome X blocks another potential outcome Y, and X does not imply the sentence p to be contracted, then Y ≠ K ÷ p. The contraction outcome K ÷ p is equal to the (unique) inclusion-maximal unblocked element of the outcome set that does not imply p. Conditions on the blocking relation are specified that ensure the existence of such a unique inclusion-maximal set for all sentences p. Blockage contraction is axiomatically characterized and its relations to AGM-style operations are investigated. In a finite-based framework, every transitively relational partial meet contraction is also a blockage contraction.

14.
Researchers in comparative psychology often use different food rewards in their studies, with food values defined by a pre-experimental preference test. While this technique rank orders food values, it provides limited information about value differences because preferences may reflect not only value differences, but also the degree to which one good may “substitute” for another (e.g., one food may substitute well for another food, but neither substitutes well for water). We propose scaling the value of food pairs by a third food that is less substitutable for either food offered in preference tests (cross-modal scaling). Here, Cebus monkeys chose between four pairwise alternatives: fruits A versus B; cereal amount X versus fruit A and cereal amount Y versus fruit B where X and Y were adjusted to produce indifference between each cereal amount and each fruit; and cereal amounts X versus Y. When choice was between perfect substitutes (different cereal amounts), preferences were nearly absolute; so too when choice was between close substitutes (fruits); however, when choice was between fruits and cereal amounts, preferences were more modest and less likely due to substitutability. These results suggest that scaling between-good value differences in terms of a third, less-substitutable good may be better than simple preference tests in defining between-good value differences.

15.
Estimates about uncertain quantities can be expressed in terms of lower limits (more than X, minimum X), or upper limits (less than Y, maximum Y). It has been shown that lower limit statements generally occur much more often than upper limit statements (Halberg & Teigen, 2009). However, in a conversational context, preferences for upper and lower limit statements will be moderated by the concerns of the interlocutors. We report three studies asking speakers and listeners about their preferences for lower and upper limit statements, in the domains of distances, durations, and prices. It appears that travellers prefer information about maximum distances and maximum durations, and buyers (but not sellers) prefer to be told about maximum prices and maximum delivery times. Mistaken maxima are at the same time regarded as more “wrong” than mistaken minima. However, this preference for “worst case” information is not necessarily shared by providers of information (advisors), who are also concerned about being blamed if wrong.

16.
The constant-ratio rule (CRR) and four interpretations of R. D. Luce's (In R. D. Luce, R. R. Bush, & E. Galanter (Eds.), Handbook of mathematical psychology (Vol. 1). New York: Wiley, 1963) similarity choice model (SCM) were tested using an alphabetic confusion paradigm. Four stimulus conditions were employed that varied in set size (three, four, or five stimulus elements) and set constituency (block letters: A, E, X; F, H, X; A, E, F, H; A, E, F, H, X), and were presented to each subject in independent blocks. The four interpretations of the SCM were generated by constraining one, both, or neither of its similarity and bias parameter sets to be invariant in across-stimulus-set model predictions. The strictest interpretation of the SCM (both the similarity and bias parameters constrained), shown to be a special case of the CRR, and the CRR produced nearly equivalent across-set predictions that provided a reasonable first approximation to the data. However, they proved inferior to the least strict SCM (neither the similarity nor bias parameters were constrained; the common interpretation of the SCM in visual confusion). Additionally, the least strict SCM was compared to J. T. Townsend's (Perception and Psychophysics, 1971, 9, 40–50, 449–454) overlap model, the all-or-none model (J. T. Townsend, Journal of Mathematical Psychology, 1978, 18, 25–38), and a modified version of L. H. Nakatani's (Journal of Mathematical Psychology, 1972, 9, 104–127) confusion-choice model. Both the least strict SCM and confusion-choice models produced nearly equivalent within-stimulus-set predictions that were superior to the overlap and all-or-none within-set predictions. Measurement conditions related to model structure and equivalence relations among the models, many of them new, were examined and compared with the statistical fit results of the investigation.
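In the similarity choice model, the probability of giving response j to stimulus i is proportional to the product of a symmetric similarity η(i, j) and a response bias β(j). A minimal sketch for a three-letter set, with made-up parameter values (the studies estimated these from confusion matrices):

```python
# Luce's similarity choice model: P(respond j | stimulus i) is
# proportional to eta[i][j] * beta[j]. Illustrative parameters only.
letters = ["A", "E", "X"]
eta = {  # symmetric similarities; self-similarity fixed at 1
    ("A", "A"): 1.0, ("E", "E"): 1.0, ("X", "X"): 1.0,
    ("A", "E"): 0.6, ("E", "A"): 0.6,
    ("A", "X"): 0.2, ("X", "A"): 0.2,
    ("E", "X"): 0.3, ("X", "E"): 0.3,
}
beta = {"A": 0.4, "E": 0.35, "X": 0.25}  # response biases

def p_response(i, j):
    num = eta[(i, j)] * beta[j]
    den = sum(eta[(i, k)] * beta[k] for k in letters)
    return num / den

row = {j: round(p_response("A", j), 3) for j in letters}
print(row)
assert abs(sum(p_response("A", j) for j in letters) - 1) < 1e-12
```

The "interpretations" contrasted in the study amount to whether eta, beta, or both are held invariant when the same letters appear in different stimulus sets; the least strict interpretation re-estimates both per set.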

17.
This study assessed the ability of Pascual-Leone's Theory of Constructive Operators to predict the minimum age or maturational level at which integration of a motor task could be achieved. Children 5–12 years of age (n = 114) performed a discrete motor task requiring a constrained circular movement to be integrated with an unconstrained linear movement to a target. The Theory of Constructive Operators and the principles of constructive cognition were used to generate a model of task performance. Based on the model and in accordance with the theory, it was predicted that 5- to 6-year-old subjects would lack the cognitive capacity (M-capacity) to efficiently integrate this task. An analysis of covariance for age was performed on task parameters reflecting integration (and highest M-demand) with movement speed as the covariate. Scheffé contrasts supported the prediction, as 5- to 6-year-old subjects were inferior to each of the other age groups (p < .05). Furthermore, no significant differences were found to exist between any of the older age groups.

18.
In a constrained finger-tapping task, in which a subject attempts to match the rate of tapping responses to the rate of a pacer stimulus, interresponse interval (IRI) was a nonlinear function of interstimulus interval (ISI), in agreement with the results of Collyer, Broadbent, and Church (1992). In an unconstrained task, the subjects were not given an ISI to match, but were instructed to tap at their preferred rate, one that seemed not too fast or too slow for comfortable production. The distribution of preferred IRIs was bimodal rather than unimodal, with modes at 272 and 450 msec. Preferred IRIs also tended to become shorter over successive sessions. Time intervals that were preferred in the unconstrained task tended to be intervals that were overproduced (IRI > ISI) when they were used as ISIs in the constrained task. A multiple-oscillator model of timing developed by Church and Broadbent (1990) was used to simulate the two tasks. The nonlinearity in constrained tapping, termed the oscillator signature, and the bimodal distribution in unconstrained tapping were both exhibited by the model. The nature of the experimental results and the success of the simulation in capturing them both provide further support for a multiple-oscillator view of timing.

19.
A model containing linear and nonlinear parameters (e.g., a spatial multidimensional scaling model) is viewed as a linear model with free and constrained parameters. Since the rank deficiency of the design matrix for the linear model determines the number of side conditions needed to identify its parameters, the design matrix acts as a guide in identifying the parameters of the nonlinear model. Moreover, if the design matrix and the uniqueness conditions constitute an orthogonal linear model, then the associated error sum of squares may be expressed in a form which separates the free and constrained parameters. This immediately provides least squares estimates of the free parameters, while simplifying the least squares problem for those which are constrained. When the least squares estimates for a nonlinear model are obtained in this way, i.e., by conceptualizing it as a submodel, the final error sum of squares for the nonlinear model will be a restricted minimum whenever the side conditions of the model become real restrictions upon its submodel. In this case the design matrix for the embracing orthogonal model serves as a guide in introducing parameters into the nonlinear model as well as in identifying these parameters. The method of overwriting a nonlinear model with an orthogonal linear model is illustrated with two different spatial analyses of a three-way preference table.

20.
If X loves Y does it follow that X has reasons to love a physiologically exact replacement for Y? Can love's reasons be duplicated? One response to the problem is to suggest that X lacks reasons for loving such a duplicate because the reason-conferring properties of Y cannot be fully duplicated. But a concern, played upon by Derek Parfit, is that this response may result from a failure to take account of the psychological pressures of an actual duplication scenario. In the face of the actual loss of a loved one and the subsequent appearance of a duplicate, how could we resist the inclination to love? Drawing upon duplication scenarios from Parfit and from Stanislaw Lem's Solaris, this paper will argue that there could be reasons for X to come to love a duplicate of Y but that these would not be identical with the reasons that X had (and may still have) to love Y. Nor (in the case of an agent with a normal causal history) could they be reasons for a love that violates the requirement that love is a response to a relationship and therefore takes time to emerge.
