Similar Documents
20 similar documents retrieved.
1.
N-of-1 study designs involve the collection and analysis of repeated measures data from a single individual, both while not using an intervention and while using it. This study explores the use of semi-parametric and parametric bootstrap tests in the analysis of N-of-1 studies under a single time series framework in the presence of autocorrelation. When the Type I error rates of bootstrap tests are compared to those of Wald tests, our results show that the bootstrap tests have more desirable properties. We compare the results for normally distributed errors with those for contaminated normally distributed errors and find that, except when the autocorrelation is relatively large, there is little difference between the power of the parametric and semi-parametric bootstrap tests. We also examine two intervention designs, AB and ABAB, and show that the ABAB design has more power. The results provide guidelines for designing N-of-1 studies: how many observations and how many intervention changes are needed to achieve a given level of power, and which test should be performed.
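The bootstrap machinery described here is straightforward to sketch. Below is a minimal parametric bootstrap test for an AB design with AR(1) errors; it is our own illustration, not the authors' implementation, and every choice (series length, phi, the moment-based estimate of phi) is an assumption.

```python
# Minimal parametric bootstrap test for an AB N-of-1 design with AR(1) errors.
# Illustrative sketch only; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def effect(y, phase):
    """Estimated intervention effect: difference between phase means."""
    return y[phase == 1].mean() - y[phase == 0].mean()

def ar1_noise(n, phi, sigma, rng):
    """Simulate a stationary AR(1) error series."""
    e = np.empty(n)
    e[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi ** 2))
    for t in range(1, n):
        e[t] = phi * e[t - 1] + rng.normal(0.0, sigma)
    return e

def bootstrap_p(y, phase, n_boot=2000):
    obs = effect(y, phase)
    r = y - y.mean()                         # residuals under the null (no effect)
    phi = np.corrcoef(r[:-1], r[1:])[0, 1]   # crude lag-1 estimate of phi
    sigma = np.std(r[1:] - phi * r[:-1], ddof=1)
    null = np.array([effect(y.mean() + ar1_noise(len(y), phi, sigma, rng), phase)
                     for _ in range(n_boot)])
    return np.mean(np.abs(null) >= np.abs(obs))  # two-sided bootstrap p-value

phase = np.repeat([0, 1], 20)                # AB design: 20 baseline, 20 treatment
y = 10.0 + 1.5 * phase + ar1_noise(40, 0.3, 1.0, rng)
print("bootstrap p-value:", bootstrap_p(y, phase))
```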

2.
Wedell DH, Moro R. Cognition, 2008, 107(1): 105-136.
Two experiments used within-subject designs to examine how conjunction errors depend on the use of (1) choice versus estimation tasks, (2) probability versus frequency language, and (3) conjunctions of two likely events versus conjunctions of likely and unlikely events. All problems included a three-option format verified to minimize misinterpretation of the base event. In both experiments, conjunction errors were reduced when likely events were conjoined. Conjunction errors were also reduced for estimations compared with choices, with this reduction greater for likely conjuncts, an interaction effect. Shifting conceptual focus from probabilities to frequencies did not affect conjunction error rates. Analyses of numerical estimates for a subset of the problems provided support for the use of three general models by participants for generating estimates. Strikingly, the order in which the two tasks were carried out did not affect the pattern of results, supporting the idea that the mode of responding strongly determines the mode of thinking about conjunctions and hence the occurrence of the conjunction fallacy. These findings were evaluated in terms of implications for rationality of human judgment and reasoning.

3.
The crux in psychometrics is how to estimate the probability that a respondent answers an item correctly on one occasion out of many. Under the current testing paradigm this probability is estimated using all kinds of statistical techniques and mathematical modeling. Multiple evaluation is a new testing paradigm that uses the person's own estimates of these probabilities as data. It is compared to multiple choice, which turns out to be a degenerate form of multiple evaluation. Multiple evaluation has much less measurement error than multiple choice, and this measurement error does not favor the examinee. When the test is used for selection purposes, as it is with multiple choice, the probability of a Type II error (an unjustified pass) is almost negligible. Procedures for statistical item-and-test analyses under the multiple evaluation paradigm are presented. These procedures provide more accurate information than is possible under the multiple choice paradigm. A computer program that implements multiple evaluation is also discussed.

4.
Low numerical probabilities tend to be directionally ambiguous, meaning they can be interpreted either positively, suggesting the occurrence of the target event, or negatively, suggesting its non-occurrence. High numerical probabilities, however, are typically interpreted positively. We argue that the greater directional ambiguity of low numerical probabilities may make them more susceptible than high probabilities to contextual influences. Results from five experiments supported this premise, with perceived base rate affecting the interpretation of an event’s numerical posterior probability more when it was low than high. The effect is consistent with a confirmatory hypothesis testing process, with the relevant perceived base rate suggesting the directional hypothesis which people then test in a confirmatory manner.

5.
When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying the probability of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured as reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results from an implementation of the paradigm are discussed.
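The two valuation rules being contrasted are compact enough to state in a few lines. The sketch below is our own illustration; the square-root utility function and the risk weight lambda are assumptions, not anything proposed in the article.

```python
# Valuing a risky prospect (payoffs x_i with probabilities p_i) two ways.
import numpy as np

x = np.array([0.0, 50.0, 100.0])   # payoff in each state of nature
p = np.array([0.2, 0.5, 0.3])      # probability of each state (sums to 1)

# Expected utility: sum_i p_i * u(x_i); u(x) = sqrt(x) is an assumed concave utility.
eu = np.sum(p * np.sqrt(x))

# Mean-variance: E[x] - lambda * Var[x]; lambda = 0.01 is an assumed risk weight.
mean = np.sum(p * x)
var = np.sum(p * (x - mean) ** 2)
mv = mean - 0.01 * var

print(f"expected utility: {eu:.2f}  mean-variance value: {mv:.2f}")
```

Note how the mean-variance value can be maintained from running estimates of the mean and variance alone, which is what makes learning easy, whereas expected utility requires tracking each state probability separately.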

6.
Although many common uses of p-values for making statistical inferences in contemporary scientific research have been shown to be invalid, no one, to our knowledge, has adequately assessed the main original justification for their use, which is that they can help to control the Type I error rate (Neyman & Pearson, 1928, 1933). We address this issue head-on by asking a specific question: Across what domain, specifically, do we wish to control the Type I error rate? For example, do we wish to control it across all of science, across all of a specific discipline such as psychology, across a researcher's active lifetime, across a substantive research area, across an experiment, or across a set of hypotheses? In attempting to answer these questions, we show that each one leads to troubling dilemmas wherein controlling the Type I error rate turns out to be inconsistent with other scientific desiderata. This inconsistency implies that we must make a choice. In our view, the other scientific desiderata are much more valuable than controlling the Type I error rate and so it is the latter, rather than the former, with which we must dispense. But by doing so—that is, by eliminating the Type I error justification for computing and using p-values—there is even less reason to believe that p is useful for validly rejecting null hypotheses than previous critics have suggested.

7.
The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory (AM) may be incorrectly incorporated into another, forming AM conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of AM conjunction errors.

8.
The pituitary-adrenal axis was studied in relation to Type A behaviour, hostility and vital exhaustion among 69 healthy middle-aged men. The results showed that psychological factors could explain a significant proportion of the biologically manipulated responses of the HPA axis, but that they worked in different ways. Type A behaviour was related to a high level of mean basal ACTH and a low cortisol response to ACTH stimulation after dexamethasone suppression; hostility was related to a high level of mean basal cortisol and a high cortisol/ACTH ratio; and vital exhaustion was characterized by a low level of mean basal ACTH and a decreased ACTH relative to cortisol. These adrenocortical patterns (high ACTH with low cortisol; high cortisol; and low ACTH with low mean basal cortisol), related to Type A behaviour, hostility and exhaustion respectively, are in line with the traditional physiological stress model and suggest that different adrenocortical responses might be able to identify different mental stress processes. Sense of control has been suggested as a key concept for a psychological understanding of this finding.

9.
In typical discrimination experiments, participants are presented with a constant standard and a variable comparison stimulus and their task is to judge which of these two stimuli is larger (comparative judgement). In these experiments, discrimination sensitivity depends on the temporal order of these stimuli (Type B effect) and is usually higher when the standard precedes rather than follows the comparison. Here, we outline how two models of stimulus discrimination can account for the Type B effect, namely the weighted difference model (or basic Sensation Weighting model) and the Internal Reference Model. For both models, the predicted psychometric functions for comparative judgements as well as for equality judgements, in which participants indicate whether they perceived the two stimuli to be equal or not equal, are derived and it is shown that the models also predict a Type B effect for equality judgements. In the empirical part, the models' predictions are evaluated. To this end, participants performed a duration discrimination task with comparative judgements and with equality judgements. In line with the models' predictions, a Type B effect was observed for both judgement types. In addition, a time-order error, as indicated by shifts of the psychometric functions, and differences in response times were observed only for the equality judgement. Since both models entail distinct additional predictions, it seems worthwhile for future research to unite the two models into one conceptual framework.

10.
陈增祥, 何云, 李枭, 王琳. 心理学报 (Acta Psychologica Sinica), 2022, 54(9): 1106-1121.
Across five experiments (including one preregistered experiment), this article examines how individuals' perceived relative social status affects consumers' preference for complex versus simple product designs. Experiments 1 and 2 found that individuals with relatively low social status prefer products with complex designs. Experiments 3 and 4 probed the mediating mechanism: complex designs convey cues of effort, and individuals with relatively low social status value effort and therefore prefer complex designs. Experiment 5 further verified this mechanism through moderation, showing that the effect of social status on design preference holds only for individuals who value effort. The article advances research on consumer aesthetic preference, subjective social status, and consumer effort.

11.
A Monte Carlo simulation was conducted to compare five pairwise multiple comparison procedures. The number of means varied from 4 to 6 and the sample size ratio varied from 1 to 60. Procedures were evaluated on the basis of Type I errors, any-pair power and all-pairs power. Four procedures were shown to be conservative, while the fifth provided adequate control of Type I errors only for restricted values of the sample size ratio. No procedure was found to be uniformly most powerful. The Tukey-Kramer procedure was found to provide the best any-pair power provided it is applied without requiring a significant overall F test. In most cases, the Hayter-Fisher modification of the Tukey-Kramer was found to provide very good any-pair power and to be uniformly more powerful than the Tukey-Kramer when a significant overall F test is required. A partition-based version of Peritz's method usually provided the greatest all-pairs power. A modification of the Shaffer-Welsch was found to be useful in certain conditions.
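For readers who want to apply the recommended procedure directly, recent SciPy versions expose Tukey's HSD, which reduces to the Tukey-Kramer method when sample sizes are unequal. The snippet below is a minimal sketch with made-up data; the group means and the deliberately unequal sample sizes are our assumptions.

```python
# Tukey-Kramer pairwise comparisons with unequal group sizes, applied directly
# (no gating on a significant overall F test, per the finding above).
import numpy as np
from scipy.stats import tukey_hsd

rng = np.random.default_rng(3)
g1 = rng.normal(0.0, 1.0, 10)
g2 = rng.normal(0.5, 1.0, 40)    # unequal sample sizes, as in the simulation
g3 = rng.normal(1.0, 1.0, 90)

res = tukey_hsd(g1, g2, g3)      # Tukey-Kramer adjustment for unequal n
print(res)                       # pairwise differences with adjusted p-values
```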

12.
Waveform data resulting from time-intensive longitudinal designs require careful treatment. In particular, the statistical properties of summary metrics in this area are crucial. We draw on event-related potential (ERP) studies, a field with a relatively long history of collecting and analyzing such data, to illustrate our points. In particular, three summary measures for a component in the average ERP waveform feature prominently in the literature: the maximum (or peak amplitude), the average (or mean amplitude) and a combination (or adaptive mean). We discuss the methodological divide associated with these summary measures. Through both analytic work and simulation study, we explore the properties (e.g., Type I and Type II errors) of these competing metrics for assessing the amplitude of an ERP component across experimental conditions. The theoretical and simulation-based arguments in this article illustrate how design (e.g., number of trials per condition) and analytic (e.g., window location) choices affect the behavior of these amplitude summary measures in statistical tests and highlight the need for transparency in reporting the analytic steps taken. There is an increased need for analytic tools for waveform data. As new analytic methods are developed to address these time-intensive longitudinal data, careful treatment of the statistical properties of summary metrics used for null hypothesis testing is crucial.
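To make the three summary measures concrete, here is a minimal sketch on a simulated component; the waveform shape, the 250-450 ms window, and the +/-25 ms adaptive-mean half-width are all our assumptions, not the article's settings.

```python
# Peak amplitude, mean amplitude, and adaptive mean for one ERP component.
import numpy as np

fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(0.0, 0.6, 1.0 / fs)          # 0-600 ms epoch
erp = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))  # toy P300-like bump

win = (t >= 0.25) & (t <= 0.45)            # analysis window (assumed)
peak_idx = np.argmax(np.where(win, erp, -np.inf))

peak_amp = erp[peak_idx]                   # maximum within the window
mean_amp = erp[win].mean()                 # average over the whole window
half = int(0.025 * fs)                     # +/-25 ms around the peak
adaptive_mean = erp[peak_idx - half : peak_idx + half + 1].mean()

print(f"peak={peak_amp:.2f}  mean={mean_amp:.2f}  adaptive={adaptive_mean:.2f}")
```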

13.
Procedures for testing mediation effects and their applications
This paper discusses mediating variables and related concepts, and the estimation of mediation effects; compares the main methods for testing mediation effects; and proposes a testing procedure that combines sequential testing with the Sobel test. The sum of the Type I and Type II error rates of this procedure is usually smaller than that of any single test, and the procedure can test for both partial and complete mediation. As a worked example, a mediating variable is introduced to study the effect of student behavior on peer relationships.
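The Sobel statistic referred to above has a closed form: for paths a (X to M) and b (M to Y) with standard errors s_a and s_b, z = ab / sqrt(b^2 s_a^2 + a^2 s_b^2). A minimal sketch follows; the coefficient values are made up.

```python
# Sobel test for a mediation effect a*b in the chain X -> M -> Y.
import math
from scipy.stats import norm

def sobel_z(a, sa, b, sb):
    """Sobel (1982) first-order standard error for the indirect effect a*b."""
    return (a * b) / math.sqrt(b ** 2 * sa ** 2 + a ** 2 * sb ** 2)

z = sobel_z(a=0.40, sa=0.10, b=0.35, sb=0.12)
p = 2.0 * (1.0 - norm.cdf(abs(z)))         # two-sided p-value
print(f"Sobel z = {z:.2f}, p = {p:.4f}")
```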

14.
In this paper, we describe a general purpose data simulator, Datasim, which is useful for anyone conducting computer-based laboratory assignments in statistics. Simulations illustrating sampling distributions, the central limit theorem, Type I and Type II decision errors, the power of a test, the effects of violating assumptions, and the distinction between orthogonal and non-orthogonal contrasts are discussed. Simulations illustrating other statistical concepts (partial correlation, regression to the mean, heteroscedasticity, the partitioning of error terms in split-plot designs, and so on) can be developed easily. Simulations can be assigned as laboratory exercises, or the instructor can execute the simulations during class, integrate the results into an ongoing lecture, and use the results to initiate class discussion of the relevant statistical concepts.
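Simulations of this kind are easy to reproduce in any modern language. Below is a sketch in the spirit of the exercises described, not Datasim itself: it estimates the Type I error rate and power of a two-sample t test by brute force; the sample size, effect size, and replication count are our choices.

```python
# Empirical Type I error and power of a two-sample t test via simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def rejection_rate(delta, n=20, reps=5000, alpha=0.05):
    """Proportion of simulated experiments in which H0 is rejected."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(delta, 1.0, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

print("Type I error (delta = 0.0):", rejection_rate(0.0))  # should hover near .05
print("Power        (delta = 0.8):", rejection_rate(0.8))  # roughly .7 at n = 20
```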

15.
This paper presents an experimental investigation of how individuals make decisions under uncertainty when faced with different payout structures in the context of gambling. Type 2 signal detection theory was used to compare sensitivity to bias manipulations between regular non-problem gamblers and non-gamblers in a novel probability-based gambling task. The results indicated that both regular gamblers and non-gamblers responded to changes in the rewards for correct responses (Experiment 1) and the penalties for errors (Experiment 2) when setting their gambling criteria, but that regular gamblers were more sensitive to these manipulations of bias. Regular gamblers also set gambling criteria that were closer to optimal. The results are discussed in terms of an expertise-transference hypothesis.
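The "gambling criteria" at issue are the decision criteria of signal detection theory, and the criterion an ideal observer should adopt under a given payoff structure has a standard closed form. The sketch below uses the classic Type 1 result from Green and Swets rather than anything specific to this paper, and the payoff values are made up.

```python
# Optimal likelihood-ratio criterion (beta) given base rates and payoffs.
def optimal_beta(p_signal, v_hit, v_miss, v_cr, v_fa):
    """beta = (P(noise)/P(signal)) * ((V_cr - V_fa) / (V_hit - V_miss))."""
    return ((1.0 - p_signal) / p_signal) * ((v_cr - v_fa) / (v_hit - v_miss))

# Raising the reward for hits (or the penalty for misses) lowers beta,
# i.e., shifts the observer toward responding "signal" / gambling more often.
print(optimal_beta(p_signal=0.5, v_hit=2.0, v_miss=0.0, v_cr=1.0, v_fa=-1.0))
```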

16.
The statistical significance levels of the Wilcoxon-Mann-Whitney test and the Kruskal-Wallis test are substantially biased by heterogeneous variances of treatment groups—even when sample sizes are equal. Under these conditions, the Type I error probabilities of the nonparametric tests, performed at the .01, .05, and .10 significance levels, increase by as much as 40%-50% in many cases and sometimes as much as 300%. The bias increases systematically as the ratio of standard deviations of treatment groups increases and remains fairly constant for various sample sizes. There is no indication that Type I error probabilities approach the significance level asymptotically as sample size increases.
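The reported bias is easy to check by simulation. The sketch below estimates the empirical Type I error rate of the Wilcoxon-Mann-Whitney test for equal-size groups with equal means but unequal standard deviations; the sample size, SD ratios, and replication count are our choices, not the study's.

```python
# Empirical Type I error of the Wilcoxon-Mann-Whitney test under
# heterogeneous variances with equal sample sizes.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)

def type1_rate(sd_ratio, n=25, reps=5000, alpha=0.05):
    rejections = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(0.0, sd_ratio, n)   # same mean, different spread
        if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
            rejections += 1
    return rejections / reps

for ratio in (1.0, 2.0, 4.0):
    print(f"SD ratio {ratio}: empirical alpha = {type1_rate(ratio):.3f}")
```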

17.
This study investigated the "knew it all along" explanation of the hypercorrection effect. The hypercorrection effect refers to the finding that when people are given corrective feedback, errors that are committed with high confidence are easier to correct than low-confidence errors. Experiment 1 showed that people were more likely to claim that they knew it all along when they were given the answers to high-confidence errors as compared with low-confidence errors. Experiments 2 and 3 investigated whether people really did know the correct answers before being told or whether the claim in Experiment 1 was mere hindsight bias. Experiment 2 showed that (a) participants were more likely to choose the correct answer in a 2nd guess multiple-choice test when they had expressed an error with high rather than low confidence and (b) that they were more likely to generate the correct answers to high-confidence as compared with low-confidence errors after being told they were wrong and to try again. Experiment 3 showed that (c) people were more likely to produce the correct answer when given a 2-letter cue to high- rather than low-confidence errors and that (d) when feedback was scaffolded by presenting the target letters 1 by 1, people needed fewer such letter prompts to reach the correct answers when they had committed high- rather than low-confidence errors. These results converge on the conclusion that when people said that they knew it all along, they were right. This knowledge, no doubt, contributes to why they are able to correct those high-confidence errors so easily.

18.
Verbal phrases denoting uncertainty are of two kinds: positive, suggesting the occurrence of a target outcome, and negative, drawing attention to its nonoccurrence (Teigen & Brun, 1995). This directionality is correlated with, but not identical to, high and low p values. Choice of phrase will in turn influence predictions and decisions. A treatment described as having “some possibility” of success will be recommended, as opposed to when it is described as “quite uncertain,” even if the probability of cure referred to by these two expressions is judged to be the same (Experiment 1). Individuals who formulate their chances of achieving a successful outcome in positive terms are supposed to make different decisions than individuals who use equivalent, but negatively formulated, phrases (Experiments 2 and 3). Finally, negative phrases lead to fewer conjunction errors in probabilistic reasoning than do positive phrases (Experiment 4). For instance, a combination of 2 “uncertain” outcomes is readily seen to be “very uncertain.” But positive phrases lead to fewer disjunction errors than do negative phrases. Thus verbal probabilistic phrases differ from numerical probabilities not primarily by being more “vague,” but by suggesting more clearly the kind of inferences that should be drawn.

19.
Drawing on persuasion models, this study examines the effects of voice type, supervisor-subordinate relationship, and managers' perceived loyalty on managers' endorsement of employee voice. Based on experimental data from two samples of managers, the study finds that: (1) managers are more likely to endorse promotive voice than prohibitive voice; (2) when the supervisor-subordinate relationship is poor, voice type has a significant effect on voice endorsement, whereas when the relationship is good the effect is not significant; and (3) the supervisor-subordinate relationship moderates the effect of voice type on voice endorsement through managers' perceived loyalty.

20.
C Rode, L Cosmides, W Hell, J Tooby. Cognition, 1999, 72(3): 269-304.
When given a choice between two otherwise equivalent options - one in which the probability information is stated and another in which it is missing - most people avoid the option with missing probability information (Camerer & Weber, 1992). This robust, frequently replicated tendency is known as the ambiguity effect. It is unclear, however, why the ambiguity effect occurs. Experiments 1 and 2, which separated effects of the comparison process from those related to missing probability information, demonstrate that the ambiguity effect is elicited by missing probabilities rather than by comparison of options. Experiments 3 and 4 test predictions drawn from the literature on behavioral ecology. It is suggested that choices between two options should reflect three parameters: (1) the need of the organism; (2) the mean expected outcome of each option; and (3) the variance associated with each option's outcome. It is hypothesized that unknown probabilities are avoided because they co-occur with high outcome variability. In Experiment 3 it was found that subjects systematically avoid options with high outcome variability regardless of whether probabilities are explicitly stated or not. In Experiment 4, we reversed the ambiguity effect: when participants' need was greater than the known option's expected mean outcome, subjects preferred the ambiguous (high-variance) option. From these experiments we conclude that people do not generally avoid ambiguous options. Instead, they take into account expected outcome, outcome variability, and their need in order to arrive at a decision that is most likely to satisfy this need.
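The three-parameter choice rule suggested by Experiments 3 and 4 can be made concrete: choose the option with the higher probability of meeting the current need. The sketch below assumes normally distributed outcomes, which is our simplification; the means, variances, and need levels are made up, chosen only to reproduce the qualitative reversal reported above.

```python
# Need-based choice between a low-variance and a high-variance option.
from scipy.stats import norm

def p_meets_need(mean, sd, need):
    """P(outcome >= need) for a normally distributed outcome."""
    return 1.0 - norm.cdf(need, loc=mean, scale=sd)

known     = {"mean": 10.0, "sd": 1.0}   # unambiguous, low-variance option
ambiguous = {"mean": 10.0, "sd": 4.0}   # ambiguous, high-variance option

for need in (8.0, 14.0):
    pk = p_meets_need(**known, need=need)
    pa = p_meets_need(**ambiguous, need=need)
    choice = "known" if pk > pa else "ambiguous"
    print(f"need={need}: P(known)={pk:.3f}  P(ambiguous)={pa:.3f}  -> {choice}")
```

When the need is below the options' common mean, the low-variance option wins; when the need exceeds the mean, only the high-variance option offers a realistic chance of meeting it, reproducing the reversal of the ambiguity effect.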
