Similar Literature
20 similar documents found (search time: 500 ms)
1.
The Plackett-Luce model (PL) for ranked data assumes the forward order of the ranking process. This hypothesis postulates that the ranking of the items is carried out by sequentially assigning the positions from the top (most liked) to the bottom (least liked) alternative. This assumption has recently been relaxed with the Extended Plackett-Luce model (EPL) through the introduction of a discrete reference order parameter describing the rank attribution path. Starting from two formal properties of the EPL, the former related to the inverse ordering of the item probabilities at the first and last stage of the ranking process and the latter well known as the independence of irrelevant alternatives (or Luce's choice axiom), we derive novel diagnostic tools for testing the appropriateness of the EPL assumption as the actual sampling distribution of the observed rankings. These diagnostic tools can help uncover possible idiosyncratic paths in the sequential choice process. Besides helping to fill the gap in goodness-of-fit methods for the family of multistage models, we also show how one of the two statistics can be conveniently exploited to construct a heuristic method that surrogates the maximum likelihood approach for inferring the underlying reference order parameter. The relative performance of the proposals, compared with more conventional approaches, is illustrated by means of extensive simulation studies.
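The forward-order choice mechanism the abstract describes can be made concrete with a short sketch (not from the cited paper; `plackett_luce_prob` and the worth parameters are illustrative): at each stage, the probability of choosing an item is its worth divided by the total worth of the items not yet ranked.

```python
from math import isclose

def plackett_luce_prob(ordering, worth):
    """Probability of observing `ordering` (items listed from most to
    least preferred) under the Plackett-Luce model, where `worth` maps
    each item to a positive support parameter."""
    remaining = list(ordering)
    prob = 1.0
    for item in ordering:
        # At each stage the chosen item competes against all items
        # not yet ranked (the forward order of the ranking process).
        prob *= worth[item] / sum(worth[j] for j in remaining)
        remaining.remove(item)
    return prob

# With equal worths, every ranking of 3 items has probability 1/3! = 1/6.
w = {"a": 1.0, "b": 1.0, "c": 1.0}
assert isclose(plackett_luce_prob(("a", "b", "c"), w), 1 / 6)
```

The EPL generalizes this by letting the stages be visited in a reference order other than top-to-bottom, which the sketch above does not model.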

2.
This article first provides a brief overview of some of the empirical and conceptual work in personality following Mischel's (1968) critique. Strengths and weaknesses of traditional personality research designs, including laboratory experiments, experimental personality designs, and preexperimental correlational studies, are presented. Some more recent approaches to designing laboratory research that attempt to address issues of external validity, and approaches to designing field investigations that attempt to address issues of internal validity, are discussed. Issues associated with the measurement of constructs, the broadening of measurement methods, and newer data analytic techniques are noted. Finally, a brief orientation to the other papers in the special issue is presented.

3.
A Thurstonian model for ranking data assumes that observed rankings are consistent with those of a set of underlying continuous variables. This model is appealing since it renders ranking data amenable to familiar models for continuous response variables—namely, linear regression models. To date, however, the use of Thurstonian models for ranking data has been very rare in practice. One reason for this may be that inferences based on these models require specialized technical methods. These methods have been developed to address computational challenges involved in these models but are not easy to implement without considerable technical expertise and are not widely available in software packages. To address this limitation, we show that Bayesian Thurstonian models for ranking data can be very easily implemented with the JAGS software package. We provide JAGS model files for Thurstonian ranking models for general use, discuss their implementation, and illustrate their use in analyses.
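The core Thurstonian assumption, that an observed ranking is the ordering of latent continuous draws, can be sketched generatively. This is a hypothetical Python illustration, not the JAGS code the article provides; `sample_ranking` and the example means are invented for illustration.

```python
import random

def sample_ranking(means, sd=1.0, rng=random):
    """Draw one ranking under a simple Thurstonian model: each item gets
    a latent normal draw around its mean, and the observed ranking is
    the ordering of those draws (largest draw = most preferred)."""
    latent = {item: rng.gauss(mu, sd) for item, mu in means.items()}
    return sorted(latent, key=latent.get, reverse=True)

random.seed(0)
means = {"x": 2.0, "y": 0.0, "z": -2.0}
rankings = [tuple(sample_ranking(means)) for _ in range(2000)]
# The item with the highest latent mean should most often be ranked first.
first_places = [r[0] for r in rankings]
top = max(set(first_places), key=first_places.count)
```

Bayesian inference, as in the article, runs this generative story in reverse: given observed rankings, it recovers a posterior over the latent means.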

4.
This paper provides an empirical comparison of two methods of attribute valuation: the analytic hierarchy process (AHP) and conjoint analysis. Variants within each approach are also examined. The results of two empirical studies indicate that the methods differ in their predictive and convergent validity. Within the AHP methods no significant difference in predictive validity was found. Within the conjoint methods, the ranking method significantly outperformed the rating method. The difference in predictive validity between the AHP and conjoint methods was significant in the second study but not in the first study, suggesting superior performance of the AHP over conjoint analysis in complex problems. Copyright © 1998 John Wiley & Sons, Ltd.

5.
This paper introduces the concept of user validity and provides a new perspective on the validity of interpretations from tests. Test interpretation is based on outputs such as test scores, profiles, reports, spreadsheets of multiple candidates' scores, etc. The user validity perspective focuses on the interpretations a test user makes given the purpose of the test and the information provided in the test output. This innovative perspective focuses on how user validity can be extended to content, criterion, and to some extent construct‐related validity. It provides a basis for researching the validity of interpretations and an improved understanding of the appropriateness of different approaches to score interpretation, as well as how to design test outputs and assessments that are pragmatic and optimal.

6.
Few British organizations utilize handwriting analysis for personality assessment, but in some European countries handwriting analysis is extremely popular. Research has examined the reliability and validity of both methods of assessment, but few studies have directly compared the two. In this study, the personality of 120 subjects was assessed by the Cattell 16PF and by handwriting analysis. Each subject was presented with five handwriting analysis textual reports and five personality textual reports (one of each being their own) and asked to rank order each set in terms of perceived accuracy. The same ranking process was undertaken by each respondent's social partner. The results demonstrated that handwriting reports were ranked at a chance level by Self and by Other and that personality reports were ranked at a well above chance level by Self and by Other. Self-rankings were more accurate than Other-rankings.

7.
On the science of Rorschach research
Wood et al.'s (1999b) article contained several general points that are quite sound. Conducting research with an extreme groups design does produce effect sizes that are larger than those observed in an unselected population. Appropriate control groups are important for any study that wishes to shed light on the characteristics of a targeted experimental group, and experimental validity is enhanced when researchers collect data from both groups simultaneously. Diagnostic efficiency statistics--or any summary measures of test validity--should be trusted more when they are drawn from multiple studies conducted by different investigators across numerous settings rather than from a single investigator's work. There should be no question that these points are correct. However, I have pointed out numerous problems with specific aspects of Wood et al.'s (1999b) article. Wood et al. gave improper citations that claimed researchers found or said things that they did not. Wood et al. indicated my data set did not support the incremental validity of the Rorschach over the MMPI-2 when, in fact, my study never reported such an analysis and my data actually reveal that the opposite conclusion is warranted. Wood et al. asserted there was only one proper way to conduct incremental validity analyses even though experts have described how their recommended procedure can lead to significant complications. Wood et al. cited a section of Cohen and Cohen (1983) to bolster their claim that hierarchical and step-wise regression procedures were incompatible and to criticize Burns and Viglione's (1996) regression analysis. However, that section of Cohen and Cohen's text actually contradicted Wood et al.'s argument. Wood et al. tried to convince readers that Burns and Viglione used improper alpha levels and drew improper conclusions from their regression data although Burns and Viglione had followed the research evidence on this topic and the expert recommendations provided in Hosmer and Lemeshow's (1989) classic text. Wood et al. oversimplified issues associated with extreme group research designs and erroneously suggested that diagnostic studies were immune from interpretive confounds that can be associated with this type of design. Wood et al. ignored or dismissed the valid reasons why Burns and Viglione used an extreme groups design, and they never mentioned how Burns and Viglione used a homogeneous sample that actually was likely to find smaller than normal effect sizes. Wood et al. also overlooked the fact that Burns and Viglione identified their results as applying to female nonpatients; they never suggested their findings would characterize those obtained from a clinical sample. Wood et al. criticized composite measures although some of the most important and classic findings in the history of research on personality recommend composite measures as a way to minimize error and maximize validity. Wood et al. also were mistaken about the elements that constitute an optimal composite measure. Wood et al. apparently ignored the factor-analytic evidence that demonstrated how Burns and Viglione created a reasonable composite scale, and Wood et al. similarly ignored the clear evidence that supported the content and criterion-related validity of the EMRF. With respect to the HEV, Wood et al. created a z-score formula that used the wrong means and standard deviations. They continued to use this formula despite being informed that it was incorrect. Subsequently, Wood et al. told readers that their faulty z-score formula was "incompatible" with the proper weighted formula and asserted that the two formulas "do not yield identical results" and "do not yield HEV scores that are identical or even very close." These published claims were made even though Wood et al. had seen the results from eight large samples, all of which demonstrated that their wrong formula had correlations greater than .998 with the correct formula. At worst, it seems that Wood et al. (199

8.
Frequent counselor errors in test interpretation include presenting test results too soon, interpreting them without reference to clearly defined criteria, erroneously assuming their validity, and failing to present them in terms the student can understand. These errors can be reduced through emphasis on the student's felt need for information, helping him state his questions operationally, basing interpretive statements on empirical validity, and by communicating test results in an understandable manner. Observations of behavior to determine student acceptance of the interpretation as well as self-report to determine recall and understanding of the interpretation should be used. The focus of evaluation should be on student recall, understanding, and acceptance of predictions derived from test results.

9.
Those who conduct integrated assessments (IAs) are aware of the need to explicitly consider multiple criteria and uncertainties when evaluating policies for preventing global warming. MCDM methods are potentially useful for understanding tradeoffs and evaluating risks associated with climate policy alternatives. A difficulty facing potential MCDM users is the wide range of different techniques that have been proposed, each with distinct advantages. Methods differ in terms of validity, ease of use, and appropriateness to the problem. Alternative methods also can yield strikingly different rankings of alternatives. A workshop was held in which climate change experts and policy makers evaluated the usefulness of MCDM for IA. Participants applied several methods in the context of a hypothetical greenhouse gas policy decision. Methods compared include value and utility functions, goal programming, ELECTRE, fuzzy sets, stochastic dominance, min max regret, and several weight selection methods. Ranges, rather than point estimates, were provided for some questions to incorporate imprecision regarding weights. Additionally, several visualization methods for both deterministic and uncertain cases were used and evaluated. Analysis of method results and participant feedback through questionnaires and discussion provide the basis for conclusions regarding the use of MCDM methods for climate change policy and IA analyses. Hypotheses are examined concerning predictive and convergent validity of methods, existence of splitting bias among experts, perceived ability of methods to aid decision‐making, and whether expressing imprecision can change ranking results. Because participants gained from viewing a problem from several perspectives and results from different methods often significantly differed, it appears worthwhile to apply several MCDM methods to increase user confidence and insight. The participants themselves recommended such multimethod approaches for policymaking. Yet they preferred the freedom of unaided decision‐making most of all, challenging the MCDM community to create transparent methods that permit maximum user control. Copyright © 2002 John Wiley & Sons, Ltd.

10.
This paper presents an introduction to theoretically informed qualitative psychotherapy research (QPR). Although QPR researchers have traditionally remained silent on theory, we suggest this has resulted in an implicit and unacknowledged use of theory. We argue instead for a clear articulation of qualitative researchers' theory and outline how theory can be incorporated to inform the entire qualitative research process. This approach assumes the research problem is embedded in a clearly defined and articulated theoretical framework, which also informs data collection and data analysis. We outline how researchers can use explicit theoretical frameworks to inform research question formulation, data collection and data analysis and illustrate this with specific applications of the method in practice. We believe that starting from a declared theoretical framework sets up a dialogue between the research problem, the type of data required and their meaningful analysis and interpretation. This aims not only to achieve greater depth in the final product of research, but also to enhance its utility in terms of practice; it contributes to building, altering and differentiating theory; and it allows for greater transparency by openly articulating the theoretical framework that scaffolds the entirety of the research process.

11.
After briefly surveying the generally polemical pre-modern Christian views of Muhammad, this essay considers a range of recent Christian approaches. Daniel Madigan explores often unrecognized complexities involved in the question; he considers Muhammad's message a “salutary critique” prompting Christians to a fuller understanding of their faith. Hans Küng insists that Christians should recognize Muhammad as a prophet; Islam is akin to early Jewish forms of Christianity, whose validity should be recognized. Jacques Jomier and Christian Troll are respectful of Muhammad but argue that, if Christians call him a prophet, they effectively deny their own faith. Kenneth Cragg presents a “positive, critical position”, encouraging sympathetic Christian interpretation of Muhammad's achievement in his context, but expressing reservations about the “political equation” in his ministry and contrasting this with Christ's way of redemptive suffering. Cragg's approach is upheld against criticisms as an exemplary model of Christian theological engagement with Islam.

12.
In this article modern qualitative and mixed methods approaches are criticized from the standpoint of structural-systemic epistemology. It is suggested that modern qualitative methodologies suffer from several fallacies: some of them are grounded in an inherently contradictory epistemology; others ask scientific questions only after the methods have been chosen; conduct studies inductively, so that not only the answers but even the questions are supposed to be discovered; do not create artificial situations and constraints on study situations; are adevelopmental by nature; study not external things and phenomena but symbols and representations, so that the object of study often turns out to be the researcher rather than the researched; rely on ambiguous data-interpretation methods based to a large degree on feelings and opinions; aim to understand the unique, which is theoretically impossible; or have theoretical problems with sampling. Any one of these fallacies is sufficient to exclude the possibility of achieving a structural-systemic understanding of the things and phenomena studied. It also turns out that modern qualitative methodologies share several fallacies with quantitative methodology. Therefore mixed methods approaches cannot overcome the fundamental difficulties that characterize the qualitative and quantitative methodologies taken separately. It is proposed that the structural-systemic methodology that dominated psychological thought in pre-WWII continental Europe is philosophically and theoretically better grounded than the other methodologies that can be distinguished in psychology today. Future psychology should be based on structural-systemic methodology.

13.
Current orthodoxy in research ethics assumes that subjects of clinical trials reserve the right to withdraw at any time and without giving any reason. This view sees the right to withdraw as a simple extension of the right to refuse to participate altogether. In this paper, however, I suggest that subjects should assume some responsibilities for the internal validity of the trial at consent and that these responsibilities should be captured by contract. This would allow the researcher to impose a penalty on the subject if he were to withdraw without good reason and on a whim. This proposal still leaves open the possibility of withdrawing without penalty when it is in the subject's best interests to do so. Giving researchers recourse to legal remedy may now be necessary to protect the science, as existing methods used to increase retention are inadequate for one reason or another.

14.
Jung's writings on schizophrenia are almost completely ignored or forgotten today. The purpose of this paper, along with a follow‐up article, is to review the primary themes found in Jung's writings on schizophrenia, and to assess the validity of his theories about the disorder in light of our current knowledge base in the fields of psychopathology, cognitive neuroscience and psychotherapy research. In this article, five themes related to the aetiology and phenomenology of schizophrenia from Jung's writings are discussed: 1) abaissement du niveau mental; 2) the complex; 3) mandala imagery; 4) constellation of archetypes and 5) psychological versus toxic aetiology. Reviews of the above areas suggest three conclusions. First, in many ways, Jung's ideas on schizophrenia anticipated much current thinking and data about the disorder. Second, with the recent (re)convergence of psychological and biological approaches to understanding and treating schizophrenia, the pioneering ideas of Jung regarding the importance of both factors and their interaction remain a useful and rich, but still underutilized resource. Finally, a more concerted effort to understand and evaluate the validity of Jung's concepts in terms of evidence from neuroscience could lead both to important advances in analytical psychology and to developments in therapeutic approaches that would extend beyond the treatment of schizophrenia.

15.
Selecting scholarship students from a number of competing candidates is a complex decision making process, in which multiple selection criteria have to be considered simultaneously. Multiattribute decision making (MADM) has proven to be an effective approach for ranking or selecting one or more alternatives from a finite number of alternatives with respect to multiple, usually conflicting criteria. This paper formulates the scholarship student selection process as an MADM problem, and presents suitable compensatory methods for solving the problem. A new empirical validity procedure is developed to deal with the inconsistent ranking problem caused by different MADM methods. The procedure aims at selecting a ranking outcome which has a minimum expected value loss, when true attribute weights are not known. An empirical study of a scholarship student selection problem in an Australian university is conducted to illustrate how the selection procedure works.

16.
Since item values obtained by item analysis procedures are not always stable from one situation to another, it follows that selection of items for validity or difficulty is sometimes useless. An application of Chi Square to testing the homogeneity of item values is made in the case of the UL method, and illustrative data are presented. A method of applying sampling theory to Horst's maximizing function is outlined, as illustrative of the author's observation that the results of item analysis by any of various methods may be similarly tested.
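A minimal sketch of the kind of homogeneity test the abstract describes, assuming an item's responses are summarized as pass/fail counts in two testing situations; this is illustrative and is not the paper's own UL-method computation.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table
        [[a, b],
         [c, d]]
    e.g. pass/fail counts for one item in two testing situations.
    A large statistic suggests the item value is not homogeneous
    across situations."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under homogeneity: (row total * column total) / n.
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

With 1 degree of freedom, a statistic above roughly 3.84 would be significant at the .05 level, flagging an unstable item value.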

17.
Rorschach interpretation often assumes that successive responses are not independent of one another but rather that they are part of a series of interconnected events. In this study, methods that have been used to analyze event sequence data were applied to Rorschach protocols. Results from a nonclinical group of 102 university students showed that location scores on successive responses were repeated more frequently than was predicted by chance. There was also a tendency for subjects to make transitions from larger to smaller or more detailed areas of the inkblot on successive responses. In addition, we found that subjects tended to make transitions from more adequate to less adequate use of form, and that the unusual and minus form categories tended to be repeated. A modest association between transition frequencies and individual differences in anxiety, but not between transition frequencies and depression or overall symptomatology, was demonstrated.
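The event-sequence analysis described, tallying how often successive location scores repeat versus transition, can be sketched as follows (a hypothetical illustration; the helper names and example codes are not from the study).

```python
from collections import Counter

def transition_counts(sequence):
    """Count how often each ordered pair of successive codes occurs,
    e.g. Rorschach location scores across successive responses."""
    return Counter(zip(sequence, sequence[1:]))

def repeat_rate(sequence):
    """Fraction of transitions in which the same code is repeated."""
    pairs = list(zip(sequence, sequence[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

# Example protocol: whole (W), common detail (D), unusual detail (Dd).
codes = ["W", "W", "D", "D", "Dd", "W"]
counts = transition_counts(codes)
```

Comparing observed repeat rates against the rate expected under independence is the kind of chance benchmark the study uses.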

18.
Multicriteria decision‐making (MCDM) methods are concerned with the ranking of alternatives based on expert judgements made using a number of criteria. In the MCDM field, the distance‐based approach is one popular method for obtaining a final ranking. The technique for order preference by similarity to the ideal solution (TOPSIS) is a commonly used example of this kind of MCDM method. TOPSIS ranks the alternatives with respect to their geometric distance from the positive and negative ideal solutions. Unfortunately, two reference points are often insufficient, especially for nonlinear problems. As a consequence, the final ranking is prone to errors, including the rank reversal phenomenon. This study proposes a new distance‐based MCDM method: the characteristic objects method. In this approach, the preference of each alternative is obtained on the basis of the distance from the nearest characteristic objects and their values. For this purpose, we have determined the domain and fuzzy number set for all the considered criteria. The characteristic objects are obtained as combinations of the crisp values of all the fuzzy numbers. The preference values of all the characteristic objects are determined on the basis of the tournament method and the principle of indifference. Finally, the fuzzy model is constructed and used to calculate the preference values of the alternatives, making it a multicriteria model that is free of rank reversal. A numerical example is used to illustrate the efficiency of the proposed method with respect to results from the TOPSIS method. The characteristic objects method results are more realistic than the TOPSIS results. Copyright © 2014 John Wiley & Sons, Ltd.
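As background, the classical TOPSIS procedure that the abstract takes as its baseline can be sketched as follows (a minimal illustration, not the paper's characteristic objects method; vector normalization is one common choice among several).

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with classical TOPSIS.
    matrix:  rows = alternatives, columns = criteria (raw scores)
    weights: criterion weights summing to 1
    benefit: True for larger-is-better criteria, False for cost criteria
    Returns closeness-to-ideal scores (higher = better)."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weight.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # Positive and negative ideal solutions: the two reference points.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)  # distance to positive ideal
        d_neg = math.dist(row, anti)   # distance to negative ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

Because every alternative is scored against only these two reference points, adding or removing an alternative can shift the normalization and reorder the rest, which is the rank reversal the characteristic objects method is designed to avoid.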

19.
Jones, Hughes, and Macken (2007) claim that their data and our own are inconsistent with a multicomponent working-memory model. We explain in greater detail how the model can account for the data and can address their more specific criticisms. Both sides accept that data relating to the presence of a phonological similarity effect throughout the list depend on list length. We accept that, at this point, all explanations of their interaction are speculative and require further empirical investigation. We examine J, H, & M's interpretation of their and our results in terms of an auditory modality effect, observing that their interpretation of this effect is not well supported by the literature. We suggest that their account assumes a very narrow basis for a general theory of short-term retention, in contrast to a phonological loop interpretation, which forms part of a well-developed and articulated model of working memory.

20.
Tom Andersen's Reflecting Team approach is widely (and creatively) employed in family therapy. Despite continuing enthusiasm for the practice, however, there are few journal articles reporting empirical research and only one (now dated) review of the literature. After defining reflecting team processes through practices that are embedded in particular approaches to knowledge construction and theoretical interpretation, we offer an overview of the empirical research found in our search of the literature. In the second half of this article we ask why there is so little existing research in this area. Various possible explanations are explored and future directions proposed. We conclude that a dialogue around the complex interweaving of practice, theory and research (that is, praxis) would be a helpful overall stance to adopt in relation to future work in this area.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号