Similar Documents
1.
While effect size estimates, post hoc power estimates, and a priori sample size determination are becoming a routine part of univariate analyses involving measured variables (e.g., ANOVA), such measures and methods have not been articulated for analyses involving latent means. The current article presents standardized effect size measures for latent mean differences inferred from both structured means modeling and MIMIC approaches to hypothesis testing about differences among means on a single latent construct. These measures are then related to post hoc power analysis, a priori sample size determination, and a relevant measure of construct reliability. I wish to convey my appreciation to the reviewers and Associate Editor, whose suggestions extended and strengthened the article's content immensely, and to Ralph Mueller of The George Washington University for enhancing the clarity of its presentation.

2.
GPOWER: A general power analysis program
GPOWER is a completely interactive, menu-driven program for IBM-compatible and Apple Macintosh personal computers. It performs high-precision statistical power analyses for the most common statistical tests in behavioral research, that is, t tests, F tests, and χ² tests. GPOWER computes (1) power values for given sample sizes, effect sizes, and α levels (post hoc power analyses); (2) sample sizes for given effect sizes, α levels, and power values (a priori power analyses); and (3) α and β values for given sample sizes, effect sizes, and β/α ratios (compromise power analyses). The program may be used to display graphically the relation between any two of the relevant variables, and it offers the opportunity to compute the effect size measures from basic parameters defining the alternative hypothesis. This article delineates reasons for the development of GPOWER and describes the program's capabilities and handling.
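The first two analysis types the abstract distinguishes, post hoc power and a priori sample size, can be sketched with a normal-approximation one-sample test. This is only an illustrative sketch: GPOWER itself uses exact noncentral distributions, so its values differ slightly, and the function names and the effect size d = 0.5 below are invented for the example.

```python
from math import sqrt
from statistics import NormalDist

_Z = NormalDist()  # standard normal distribution

def post_hoc_power(d, n, alpha=0.05):
    """Power of a two-sided one-sample z test for effect size d and sample size n."""
    z_crit = _Z.inv_cdf(1 - alpha / 2)   # critical value for the two-sided test
    ncp = d * sqrt(n)                    # noncentrality under the alternative
    return _Z.cdf(ncp - z_crit) + _Z.cdf(-ncp - z_crit)

def a_priori_n(d, power=0.80, alpha=0.05):
    """Smallest n whose computed power reaches the requested level."""
    n = 2
    while post_hoc_power(d, n, alpha) < power:
        n += 1
    return n

print(round(post_hoc_power(d=0.5, n=32), 3))  # ~0.807 for a medium effect
print(a_priori_n(d=0.5))                      # 32
```

A compromise analysis would instead search for the α (and implied β) satisfying a fixed β/α ratio at a given n, using the same power function.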

3.
The study examines thirty-six homogeneous peer support groups that failed to benefit most of their participants. Outcomes were assessed before participation and one year afterward using measures of mental health, marital relationships, and motherhood role indices. Four central group process characteristics previously found to be common in successful peer support groups were used as a framework for developing a series of post hoc hypotheses. The groups were found to be low on behaviorally defined cohesiveness, saliency, and cognitive structures for reframing common dilemmas, and limited in the range of therapeutic experiences offered.

4.

Study preregistration promotes transparency in scientific research by making a clear distinction between a priori and post hoc procedures or analyses. Management and applied psychology have not embraced preregistration in the way other closely related social science fields have. There may be concerns that preregistration does not add value and prevents exploratory data analyses. Using a mixed-method approach, in Study 1, we compared published preregistered samples against published non-preregistered samples. We found that preregistration effectively facilitated more transparent reporting, as indexed by criteria such as confirmed hypotheses and a priori analysis plans. Moreover, consistent with concerns that the published literature contains elevated type I error rates, preregistered samples had fewer statistically significant results (48%) than non-preregistered samples (66%). To learn about the perceived advantages, disadvantages, and misconceptions of study preregistration, in Study 2, we surveyed authors of preregistered studies and authors who had never preregistered a study. Participants in both samples had positive inclinations towards preregistration yet expressed concerns about the process. We conclude with a review of best practices for management and applied psychology stakeholders.


5.
J. Haidt, Psychological Review, 2001, 108(4): 814-834
Research on moral judgment has been dominated by rationalist models, in which moral judgment is thought to be caused by moral reasoning. The author gives 4 reasons for considering the hypothesis that moral reasoning does not cause moral judgment; rather, moral reasoning is usually a post hoc construction, generated after a judgment has been reached. The social intuitionist model is presented as an alternative to rationalist models. The model is a social model in that it deemphasizes the private reasoning done by individuals and emphasizes instead the importance of social and cultural influences. The model is an intuitionist model in that it states that moral judgment is generally the result of quick, automatic evaluations (intuitions). The model is more consistent than rationalist models with recent findings in social, cultural, evolutionary, and biological psychology, as well as in anthropology and primatology.

6.
This paper traces a progression of four computer-based methods for studying and fostering both the structure and the on-line development of knowledge. Each empirical technique employs ECHO, a connectionist model that instantiates the theory of explanatory coherence (TEC). First, verbal protocols of subjects' reasoning were modeled post hoc. Next, ECHO predicted, a priori, subjects' text-based believability ratings. Later, the bifurcation/bootstrapping method was developed to elicit and account for individuals' background knowledge, while assessing intercoder reliability regarding ECHO simulations. Finally, Convince Me, our "reasoner's workbench," automated the explication both of subjects' knowledge bases and of their belief assessments; the Convince Me software permits contrasts between the model's predictions and subjects' proposition-wise evaluations. These experimental systems enhance our understanding of the relationships among hypotheses, evidence, and the arguments that incorporate them, as well as the features that determine those relationships.

7.
Ecological Psychology, 2013, 25(3): 199-236
Properly formed and properly used evolutionary hypotheses invalidate most common criticisms and must be judged, like other hypotheses in science, through their ability to be theoretically and empirically progressive. Well-formed hypotheses incorporate established evolutionary theory with evidence of actual historical conditions. A complex, multifaceted hypothesis can predict patterns of phenotypic variation that would make sense if the hypothesis is true, but be unlikely if it is false. This approach to evolutionarily guided research is illustrated via the sexual-dinichism hypothesis, which proposes historical niche divergence between the sexes during which females could still make adaptive use of trees, but larger males could not. This has led to the investigation of several traits in modern children, where evidence for predicted sex differences in behavior was discovered. We find the objections of critics (e.g., Burton, this issue) to be appropriate for criticizing post hoc evolutionary explanations and perhaps poorly developed hypotheses, but not well-formed and properly used theories. We hold that evolutionary theories are an essential part of psychology in general and ecological psychology in particular.

8.
We examined how decision makers generate and evaluate hypotheses when data are presented sequentially. In the first 2 experiments, participants learned the relationship between data and possible causes of the data in a virtual environment. Data were then presented iteratively, and participants either generated hypotheses they thought caused the data or rated the probability of possible causes of the data. In a 3rd experiment, participants generated hypotheses and made probability judgments on the basis of previously stored general knowledge. Findings suggest that both the hypotheses one generates and the judged probability of those hypotheses are heavily influenced by the most recent evidence observed and by the diagnosticity of the evidence. Specifically, participants generated a narrow set of possible explanations when the presented evidence was diagnostic compared with when it was nondiagnostic, suggesting that nondiagnostic evidence entices participants to cast a wider net when generating hypotheses.

9.
We showed that metacomprehension accuracy improved when participants (N = 87 college students) wrote summaries of texts prior to judging their comprehension; however, accuracy only improved when summaries were written after a delay, not when written immediately after reading. We evaluated two hypotheses proposed to account for this delayed-summarization effect: the accessibility hypothesis and the situation model hypothesis. The data suggest that participants based metacomprehension judgments more on the gist of texts when they generated summaries after a delay, whereas they based judgments more on details when they generated summaries immediately after reading. Focusing on information relevant to the situation model of a text (the gist of a text) produced higher levels of metacomprehension accuracy, which is consistent with the situation model hypothesis.

10.
Geoffrey S. Holtzman, Philosophia, 2019, 47(2): 435-458
I argue that free will is a nominal construct developed and deployed post hoc in an effort to provide cohesive narratives in support of a priori moral-judgmental dispositions. In a...

11.
People routinely focus on one hypothesis and avoid consideration of alternative hypotheses on problems requiring decisions between possible states of the world, for example, on the "pseudodiagnosticity" task (Doherty, Mynatt, Tweney, & Schiavo, 1979). To account for behaviour on such "inference" problems, it is proposed that people can hold in working memory, and operate upon, only one alternative at a time, and that they have a bias to test the hypothesis they think true. In addition to being an ex post facto explanation of data selection in inference tasks, this conceptualization predicts that there are situations in which people will consider alternatives. These are:

1. “action” problems, where the alternatives are possible courses of action;

2. “inference” problems, in which evidence favours an alternative hypothesis.

Experiment 1 tested the first prediction. Subjects were given action or inference problems, each with two alternatives and two items of data relevant to each alternative. They received probabilistic information about the relation between one datum and one alternative and picked one value from among the other three possible pairs of such relations. Two findings emerged: (1) a strong tendency to select information about only one alternative with inferences; and (2) a strong tendency, compared to inferences, to select information about both alternatives with actions.

Experiment 2 tested the second prediction. It was predicted that data suggesting that one alternative was incorrect would lead many subjects to consider, and select information about, the other alternative. For actions, it was predicted that this manipulation would have no effect. Again the data were as predicted.

12.
A novel event-based conceptual implicit memory test was designed to tap the development of new associations between objects and ad hoc categories. At study, participants were presented with a plausible story that linked an incongruous object (computer) with an ad hoc category (restaurant). At test, participants judged whether a given object was typically found in a restaurant. In Experiment 1, judgment time was significantly slower for the incongruous object (computer) when the story had previously linked the computer to the restaurant, relative to when it had not. Experiment 2 replicated this effect and ruled out the alternative interpretation that this interference effect was attributable to a general slowing of responses to all studied items. Unlike in prior studies, this demonstration of associative priming cannot be attributed to perceptual priming or to test awareness in memory-intact participants. The paradigm therefore offers a unique opportunity to study single-trial conceptual learning in memory-intact and memory-impaired populations.

13.
Null hypothesis significance testing is criticised for its emphatic focus on using the appropriate statistic for the data and its overwhelming concern with low p-values. Here, we present a new technique, Observation Oriented Modeling (OOM), as an alternative to traditional techniques in the social sciences. Ten experiments on judgements of associative memory (JAM) were analysed with OOM to show data analysis procedures and the consistency of JAM results across several types of experimental manipulations. In a typical JAM task, participants are asked to rate the frequency of word pairings, such as LOST-FOUND; these ratings are then compared to actual normed associative frequencies to measure how accurately participants can judge word pairs. Three types of JAM tasks are outlined (traditional, paired, and instructional manipulations) to demonstrate how modelling complex hypotheses can be applied through OOM to this type of data, which would conventionally be analysed with null hypothesis significance testing.

14.
Coupled people, those in a relationship, devalue the attractiveness of an alternative partner compared to noncoupled people (D. J. Johnson & C. E. Rusbult, 1989). The present research tested two competing hypotheses about the mechanisms underlying this phenomenon. According to the motivational hypothesis, coupled and noncoupled people initially perceive opposite-sex others as equally attractive; coupled people, however, recalibrate their perceptions. In contrast, the perceptual hypothesis proposes that coupled people do not perceive opposite-sex others as attractive. The present study tested these competing hypotheses by measuring both involuntary and self-reported perceptions of attractiveness of opposite-sex models. Supporting the motivational hypothesis, coupled participants (n = 38) and noncoupled participants (n = 34) exhibited the same degree of pupil dilation; however, coupled participants reported lower attractiveness ratings.

15.
In typical statistical learning studies, researchers define sequences in terms of the probability of the next item in the sequence given the current item (or items), and they show that high probability sequences are treated as more familiar than low probability sequences. Existing accounts of these phenomena all assume that participants represent statistical regularities more or less as the experimenters define them: as sequential probabilities of symbols in a string. Here we offer an alternative, or possibly supplementary, hypothesis. Specifically, rather than identifying or labeling individual stimuli discretely in order to predict the next item in a sequence, we need only assume that the participant is able to represent the stimuli as evincing particular similarity relations to one another, with sequences represented as trajectories through this similarity space. We present experiments in which this hypothesis makes sharply different predictions from hypotheses based on the assumption that sequences are learned over discrete, labeled stimuli. We also present a series of simulation models that encode stimuli as positions in a continuous two-dimensional space and predict the next location from the current location. Although no model captures all of the data presented here, the results of three critical experiments are more consistent with the view that participants represent trajectories through similarity space rather than sequences of discrete labels under particular conditions.

16.
An experiment is reported comparing the effectiveness of auditory and visual stimuli in eliciting the tip-of-the-tongue phenomenon. 30 participants were asked to name the titles of 27 television shows. Half of the participants were given segments of the theme song for each show (auditory cue), and half were shown the cast photographs for each show (visual cue). Participants were asked to report whenever they experienced the tip-of-the-tongue state. There were no significant differences between the auditory and visual stimuli in terms of the incidence rate for the tip-of-the-tongue state, the amount of partial information that participants provided in their responses, or the frequency of interlopers (alternative responses that persistently come to mind). These findings suggest that the characteristics of the tip-of-the-tongue state are determined more by the nature of the response set than by the type of stimuli used as cues. The results are inconsistent with inferential theories of the tip-of-the-tongue phenomenon, such as the cue familiarity hypothesis and, instead, tend to support direct-access hypotheses.

17.
Philosophers have often noted that science displays an uncommon degree of consensus on beliefs among its practitioners. Yet consensus in the sciences is not a goal in itself. I consider cases of consensus on beliefs as concrete events. Consensus on beliefs is neither a sufficient nor a necessary condition for presuming that these beliefs constitute knowledge. A concrete consensus on a set of beliefs by a group of people at a given historical period may be explained by different factors according to various hypotheses. A particularly interesting hypothesis from an epistemic perspective is the knowledge hypothesis: shared knowledge explains a consensus on beliefs. If all the alternative hypotheses to the knowledge hypothesis are false or are not as good at explaining a concrete consensus on beliefs, the knowledge hypothesis is the best explanation of the consensus. If the knowledge hypothesis is best, a consensus becomes a plausible, though fallible, indicator of knowledge. I argue that if a consensus on beliefs is uncoerced, uniquely heterogeneous, and large, the gap between the likelihood of the consensus given the knowledge hypothesis and its likelihoods given competing hypotheses tends to increase significantly. Consensus is a better indicator of knowledge than "success" or "human flourishing".

18.
In the 2-4-6 rule discovery task, reasoners seek to discover a rule that governs the arrangement of three numbers (or triple). The to-be-discovered rule is "ascending numbers". Upon being given the triple 2-4-6 as an initial example, however, reasoners tend to formulate algebraically specific hypotheses. Traditionally, this task is conducted primarily from an internal representation of the triples and candidate hypotheses. More recently, substantial representational effects have been demonstrated wherein an external representation of the dimensions of the problem space facilitated successful rule discovery. In the two experiments reported here, an interactive graphical representation was created by concurrently plotting each triple produced by the participants. In Experiment 1, participants who performed the task with this external representation were more likely to discover the rule than were a group of control participants. Experiment 2 replicated the effect but also assessed participants' hypotheses for each triple generated. Results indicated that a graphical representation of the triples fostered the development of hypotheses that were less constrained by the implied algebraic specificity of the initial triple.
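The algebraic overspecificity described above can be made concrete with a toy sketch (the function names and probe triples are invented for illustration, not the authors' materials): the true rule accepts any ascending triple, while the typical initial hypothesis demands steps of exactly 2, so a triple that merely confirms the guess cannot distinguish the two rules.

```python
def true_rule(triple):
    """The to-be-discovered rule: strictly ascending numbers."""
    a, b, c = triple
    return a < b < c

def typical_guess(triple):
    """An algebraically specific hypothesis suggested by the seed triple 2-4-6."""
    a, b, c = triple
    return b - a == 2 and c - b == 2

seed = (2, 4, 6)
positive_probe = (10, 12, 14)  # fits both rules: a confirmation, but uninformative
negative_probe = (1, 2, 8)     # violates the guess yet satisfies the true rule

assert true_rule(seed) and typical_guess(seed)
assert true_rule(positive_probe) and typical_guess(positive_probe)
assert true_rule(negative_probe) and not typical_guess(negative_probe)
```

Only probes like the last one, which the overspecific guess rejects but the experimenter accepts, can falsify that guess; an external plot of accepted and rejected triples makes the unexplored region of the problem space visible.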

19.
James Blachowicz, Synthese, 1987, 71(3): 235-321
In recent years, there have been some attempts to defend the legitimacy of a non-inductive generative logic of discovery whose strategy is to analyze a variety of constraints on the actual generation of explanatory hypotheses. These proposed new theories, however, are only weakly generative (relying on sophisticated processes of elimination) rather than strongly generative (embodying processes of correction). This paper develops a strongly generative theory which holds that we can come to know something new only as a variant of what we already know, and that the novelty of this variant is neither thereby eliminated nor beyond our powers of characterization, a double requirement that is vital for resolving the Meno paradox. In this light, the discovery of a new hypothesis is taken as the correction of an antecedent hypothesis in response to the discrepancies between the predictions generated by that antecedent hypothesis and the desired result (e.g., the actual data to be explained). This process comprises two parallel operations. The first, which demonstrates the positive role of the facts in generating new explanations, involves a mapping between multiple hypotheses and the sets of predictions generated from those hypotheses, for the purpose of taking the actual data as a determinable variant of neighboring sets of predictions; this mapping permits the facts to indicate how corrective adjustments in the working hypothesis should be made. The second operation, which demonstrates the positive role of explanations in generating new facts, involves a mapping between differently construed versions of the actual data and the conceptualizations derived from those perceptual versions, for the purpose of taking the working hypothesis as a determinable variant of these neighboring conceptualizations; this mapping permits a given hypothesis to generate predictions increasingly closer to the actual facts. The proposed theory provides the basis for a reformed conception of justification. Because hypotheses are meaningful only as variants of neighboring hypotheses, and because such variation is corrective, their justification in the reformed sense will incorporate not only their justification in the traditional sense, but their generation as well.

20.
When faced with two competing hypotheses, people sometimes prefer to look at multiple sources of information in support of one hypothesis rather than to establish the diagnostic value of a single piece of information for the two hypotheses. This is termed pseudodiagnostic reasoning and has often been understood to reflect, among other things, poor information search strategies. Past research suggests that diagnostic reasoning may be more easily fostered when participants seek data to help in the selection of one of two competing courses of action as opposed to situations where they seek data to help infer which of two competing hypotheses is true. In the experiment reported here, we provide the first empirical evidence demonstrating that manipulating the relevance of the feature for which participants initially receive information determines whether they will make a nominally diagnostic or pseudodiagnostic selection. The discussion of these findings focuses on implications for the ability to engage in diagnostic hypothesis testing.
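The contrast between diagnostic and pseudodiagnostic selection can be sketched in Bayesian terms (the probabilities below are invented for illustration): a datum bears on H1 versus H2 only through its likelihood under both hypotheses, so collecting two likelihoods under the same hypothesis leaves the posterior undefined.

```python
def posterior_h1(p_d_given_h1, p_d_given_h2, prior_h1=0.5):
    """Posterior P(H1 | D) for two exhaustive hypotheses, via Bayes' rule."""
    joint_h1 = prior_h1 * p_d_given_h1
    joint_h2 = (1 - prior_h1) * p_d_given_h2
    return joint_h1 / (joint_h1 + joint_h2)

# Diagnostic selection: likelihoods of the SAME datum under BOTH hypotheses.
post = posterior_h1(p_d_given_h1=0.8, p_d_given_h2=0.2)
print(round(post, 2))  # 0.8: a likelihood ratio of 4 favours H1

# Pseudodiagnostic selection: P(D1|H1) = .8 and P(D2|H1) = .9 both concern H1;
# with no likelihood under H2, Bayes' rule cannot be applied at all.
```

The asymmetry is the point: however many facts a reasoner gathers about the favoured hypothesis, the comparison is uncomputable until at least one likelihood under the rival hypothesis is sought.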
