Similar Documents (20 results)
1.
Counseling studies have shown that increasing experience is not always associated with better judgments. In such studies, however, performance is assessed against external criteria, which may lack validity. The authors instead applied the Cochran–Weiss–Shanteau (CWS) index, which assesses the ability to discriminate consistently. Results showed that novice counselors performed at nearly the same level as very experienced counselors. The authors thus replicated earlier findings with a novel approach: applying an internal coherence criterion.
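The CWS index is commonly formulated as a ratio of discrimination (variation between stimuli) to inconsistency (variation across repeated judgments of the same stimulus). A minimal sketch under that assumption, with illustrative data not taken from the study:

```python
import numpy as np

def cws_index(judgments):
    """CWS ratio: discrimination between stimuli divided by
    inconsistency within repeated presentations of each stimulus.

    judgments: 2-D array, rows = stimuli, columns = repeated judgments.
    """
    judgments = np.asarray(judgments, dtype=float)
    stimulus_means = judgments.mean(axis=1)
    discrimination = stimulus_means.var()         # between-stimuli variance
    inconsistency = judgments.var(axis=1).mean()  # mean within-stimulus variance
    return discrimination / inconsistency

# A judge who separates the stimuli cleanly and repeats herself scores high;
# a judge who gives the same mean to every stimulus scores near zero.
expert = [[1, 1, 2], [5, 5, 6], [9, 9, 8]]
novice = [[1, 9, 5], [2, 8, 5], [3, 7, 5]]
print(cws_index(expert) > cws_index(novice))  # True
```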

2.
Experimental results reported by Sjöberg, Anderson and Shanteau, and Tversky on the structure underlying judgments of the utility of gambles are discussed. Although the authors find different models applicable to their data, it is shown that they may all have the same theoretical basis. Differences in experimental design, such as the stimulus ranges and response measures used in the experiments, may explain the divergent results reported earlier.

3.
This experiment sought to determine whether previously found metric violations of additive expectancy-value models (cf. J. C. Shanteau, Journal of Experimental Psychology, 1974, 103, 680–691; J. G. Lynch and J. L. Cohen, Journal of Personality and Social Psychology, 1978, 36, 1138–1151) were attributable to the inappropriateness of these models or to nonlinearities in the relationship between numerical ratings and underlying psychological impressions. Undergraduate participants performed two tasks employing the same experimental stimuli. In the first task, they rated the subjective values of hypothetical bets, judged separately and in combination. In the second task, they made pairwise comparisons of the same bets in terms of preference. The use of the same experimental stimuli in both tasks allowed a test of alternative models of utility judgment through application of the criterion of scale convergence (M. H. Birnbaum & C. T. Veit, Perception and Psychophysics, 1974, 15, 7–15). Results suggested that the additive expectancy-value model of judgments of the utilities of combinations of outcomes should be replaced by a weighted averaging rule in which the weight given to the value of each outcome in the averaging process is greater when this value is negative and extreme than when it is neutral.

4.
Weiss and Shanteau criticize the expert-performance approach and argue that this approach has not been, and more importantly cannot be, applied to the study of 'experts' in domains that lack readily available objective measures of performance, such as accuracy of judgments. In this response, I demonstrate that it is not necessary to use fictitious stimuli for which no correct responses can be identified and for which only their Cochran, Weiss, and Shanteau (CWS) index can be calculated. Instead, the expert-performance approach regenerates the judgment situation for actual cases and tracks down their subsequent observed real-world outcomes. Participants' judgments of the stimuli can then be directly scored against the actual outcomes. Opportunities for training and deliberate practice are discussed. Copyright © 2014 John Wiley & Sons, Ltd.

5.
This paper proposes an additive measure on the basis of compromise programming to evaluate fund performance from multiple criteria, of which the most usual are profitability and risk. This proposal is motivated by the fact that compromise programming is a sound decision support model to obtain scores of alternatives by minimizing weighted distances to an ideal point, the weights reflecting the investor's preferences for the criteria. To define the distance objective function, the linear-quadratic composite metric is used, which combines the advantages of linear and non-linear objective functions. A critical advantage of compromise programming for fund performance evaluation is that the model can be extended to more than two financial criteria, while other measures currently used (either ratio-based or leverage-based) consider only two criteria, say, profitability and risk. In the application, three investor profiles are defined, which involve different weighting systems and lead to different fund rankings. These rankings are compared with domination relationships, the latter establishing whether a fund is dominated or non-dominated by convex combinations of other funds. Numerical tables are provided with the data, computational process, and results, which are analysed. Copyright © 2012 John Wiley & Sons, Ltd.
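The compromise-programming idea can be sketched as scoring each fund by its weighted distance to the ideal point. The blend parameter, weights, and fund data below are illustrative assumptions, not the paper's model; the linear-quadratic metric is simplified here to a convex mix of an L1 term and a squared term:

```python
import numpy as np

def compromise_score(criteria, ideal, anti_ideal, weights, lam=0.5):
    """Weighted distance of a fund to the ideal point, using a
    hypothetical simple linear-quadratic blend on criteria
    normalized to [0, 1]. Lower score = closer to the ideal."""
    c = np.asarray(criteria, float)
    span = np.asarray(ideal, float) - np.asarray(anti_ideal, float)
    d = np.abs(np.asarray(ideal, float) - c) / span   # normalized shortfalls
    w = np.asarray(weights, float)
    linear = np.sum(w * d)                            # L1 component
    quadratic = np.sum((w * d) ** 2)                  # squared component
    return lam * linear + (1 - lam) * quadratic

# Two funds scored on (profitability, risk-adjusted safety), higher is
# better on both; an aggressive profile weights profitability at 0.6.
ideal, anti = [0.12, 1.0], [0.02, 0.0]
fund_a = compromise_score([0.10, 0.8], ideal, anti, weights=[0.6, 0.4])
fund_b = compromise_score([0.04, 0.9], ideal, anti, weights=[0.6, 0.4])
print(fund_a < fund_b)  # True: fund A ranks better for this profile
```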

6.
The (univariate) isotonic psychometric (ISOP) model (Scheiblechner, 1995) is a nonparametric IRT model for dichotomous and polytomous (rating scale) psychological test data. A weak subject independence axiom W1 postulates that the subjects are ordered in the same way except for ties (i.e., similarly or isotonically) by all items of a psychological test. A weak item independence axiom W2 postulates that the order of the items is similar for all subjects. Local independence (LI or W3) is assumed in all models. With these axioms, sample-free unidimensional ordinal measurements of items and subjects become feasible. A cancellation axiom (Co) gives, as a result, the additive isotonic psychometric (ADISOP) model and interval scales for subjects and items, and an independence axiom (W4) gives the completely additive isotonic psychometric (CADISOP) model with an interval scale for the response variable (Scheiblechner, 1999). The d-ISOP, d-ADISOP, and d-CADISOP models are generalizations to d-dimensional dependent variables (e.g., speed and accuracy of response). The author would like to thank an Associate Editor and two anonymous referees and also Professor H.H. Schulze for their very valuable suggestions and corrections.

7.
In many areas of psychology, one is interested in disclosing the underlying structural mechanisms that generated an object-by-variable data set. Often, based on theoretical or empirical arguments, it may be expected that these underlying mechanisms imply that the objects are grouped into clusters that are allowed to overlap (i.e., an object may belong to more than one cluster). In such cases, analyzing the data with Mirkin's additive profile clustering model may be appropriate. In this model: (1) each object may belong to no, one, or several clusters; (2) there is a specific variable profile associated with each cluster; and (3) the scores of the objects on the variables can be reconstructed by adding the cluster-specific variable profiles of the clusters the object in question belongs to. Until now, however, no software program has been publicly available to perform an additive profile clustering analysis. For this purpose, in this article, the ADPROCLUS program, steered by a graphical user interface, is presented. We further illustrate its use by means of the analysis of a patient-by-symptom data matrix.
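The reconstruction rule of the model (an object's scores are the sum of the profiles of the clusters it belongs to) reduces to a binary-matrix product. A toy illustration with made-up memberships and profiles, not output of ADPROCLUS:

```python
import numpy as np

# Additive profile clustering: A is a binary object-by-cluster membership
# matrix (overlap allowed), P holds one variable profile per cluster.
A = np.array([[1, 0],    # object 1: cluster 1 only
              [0, 1],    # object 2: cluster 2 only
              [1, 1],    # object 3: overlaps both clusters
              [0, 0]])   # object 4: belongs to no cluster
P = np.array([[2.0, 0.0, 1.0],   # profile of cluster 1 (illustrative)
              [0.0, 3.0, 1.0]])  # profile of cluster 2 (illustrative)

X = A @ P  # reconstructed object-by-variable data
print(X[2])  # overlapping object gets the sum of both profiles: [2. 3. 2.]
```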

8.
Previous studies have typically found that when people learn to combine two dimensions of a stimulus to select a response, they learn additive combination rules more easily than nonadditive (e.g., multiplicative) ones. The present experiments demonstrate that in some situations people can learn multiplicative rules more easily than other (e.g., additive) rules. Subjects learned to produce specified response durations when presented with stimulus lines varying in length and angle of orientation. When stimuli and correct responses were related by a multiplicative combination of power functions, learning was relatively easy (Experiment 1). In contrast, systematic response biases occurred during the early phases of learning an additive combination of linear functions (Experiment 2) and a more complex (nonadditive and nonmultiplicative) combination of linear functions (Experiment 3), suggesting that people have a tendency to induce a multiplicative combination of power functions. However, the initial biases decreased with practice. These results are explained in terms of a revised adaptive regression model of function learning originally proposed by Koh and Meyer (1991). Differences between the present results and previous results in the literature are discussed.
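The two rule families contrasted here can be written out directly. The constants below are illustrative, not the experiments' parameters; the point is that a multiplicative combination of power functions is linear in log coordinates:

```python
import numpy as np

length = np.array([2.0, 4.0, 8.0])    # line length, arbitrary units
angle = np.array([10.0, 20.0, 40.0])  # orientation in degrees

# Multiplicative combination of power functions (Experiment 1 style)
multiplicative = 50.0 * length**0.5 * angle**0.3   # response duration

# Additive combination of linear functions (Experiment 2 style)
additive = 100.0 + 20.0 * length + 5.0 * angle

# On log axes the multiplicative rule decomposes additively:
print(np.allclose(np.log(multiplicative),
                  np.log(50.0) + 0.5 * np.log(length) + 0.3 * np.log(angle)))  # True
```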

9.
Postulating that the predisposition to illness in Claridge's disease model of schizophrenia can be equated with the personality dimensions S or Insensitivity, (low) E or Extraversion, and N or Neuroticism, as measured by Van Kampen's 3DPT, and assuming that the mode of transmission of schizophrenia is basically polygenic, the genetic and environmental etiology of S, E, and N was assessed in a sample of 52 MZ and 76 DZ twin pairs and their parents by means of LISREL. In addition, in a sample of 2118 subjects, MAXCOV–HITMAX analyses were conducted for these factors as well as for the personality dimension G or Orderliness, here assessed by the 4DPT, in order to find out whether a discrete or quasi-discrete variable might also underlie these dimensions, giving support to the possibility of dominance or epistasis. The results obtained in these investigations favoured a model for all three dimensions allowing for both additive and non-additive genetic effects in combination with non-shared environmental influences. It was not possible to choose between a model involving dominance and a model involving epistatic genetic effects. With the use of scores corrected for sex and age, which were converted to normal scores, the proportion of variance explained by additive genetic factors was 20% for S, 40–41% for E, and 26–29% for N. Dominance or multiple-gene epistasis accounted for 37–38% (S), 19–20% (E), and 30–31% (N), and unshared environmental influences for 42–43% (S), 41% (E), and 42–43% (N), respectively. Copyright © 1999 John Wiley & Sons, Ltd.

10.
A conjoint measurement procedure is used for the measurement of binocular brightness as a function of left and right luminance inputs. For nonzero stimulation, the data confirm earlier findings: the system can be described as additive with a scale exponent of 1. If zero stimulation is included, however, no additive solution can be found (due to Fechner's paradox). This fact, combined with various critical remarks in the literature with respect to the existence of a real luminance-averaging system, has led us to propose a model which takes account of Fechner's paradox and incorporates "realistic" exponents without requiring a multistage processing mechanism in which different levels are characterized by different sensory scales. The proposed model makes the weighting coefficients for the two eyes depend in a continuous way on the strength of stimulation in the two eyes, especially on the amount of contrast of the monocular stimuli. For zero background stimulation, contrast can be expressed in terms of the luminance of the stimulus. In this way, the model is reduced to a simple testable form. While it is much simpler than Engel's (1969) model, the experimental results indicate that it might also work for the more general case.

11.
A new approach to problem solving was applied to multisolution problems in a memory search task. Subjects memorized a list of eight four-letter foods, and then searched mentally through the list for answers to questions. The times between successive answers (IRTs) were recorded along with the answers themselves. This allowed a comparison of two possible memory search strategies: (1) sampling with replacement, and (2) sampling without replacement. The results were largely in agreement with the sampling-without-replacement strategy. However, a more detailed breakdown of the data revealed that most subjects searched through the list in a rigid serial order. Further, an analysis of questions with identical answers showed that the IRTs were very nearly additive. This led to an additive time component model based on the independent summation of (a) read-in time, (b) memory-search time, (c) decision-making time, and (d) response-output time. This approach appeared generally more satisfactory than previous attempts to account for problem-solving behavior.
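The two candidate search strategies can be contrasted in a small simulation. The parameters are illustrative and this is not the paper's model of IRTs; it only shows why sampling without replacement terminates sooner on average:

```python
import random

def search_times(n_items=8, n_targets=3, replacement=False,
                 trials=2000, seed=0):
    """Mean number of memory probes needed to retrieve all targets,
    sampling with vs. without replacement (illustrative simulation).
    Items 0..n_targets-1 count as correct answers."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pool = list(range(n_items))
        order = pool[:]
        rng.shuffle(order)              # a fresh serial order each trial
        found, probes = set(), 0
        while len(found) < n_targets:
            probes += 1
            item = rng.choice(pool) if replacement else order[probes - 1]
            if item < n_targets:
                found.add(item)
        total += probes
    return total / trials

# Without replacement the search never needs more than n_items probes
print(search_times(replacement=False), search_times(replacement=True))
```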

12.
The monitoring of information acquisition behavior, along with other process tracing measures such as response times, was used to examine how individuals process information about gambles into a decision. Subjects indicated preferences among specially constructed three-outcome gambles. The number of alternatives available was varied across the sets of gambles. A majority of the subjects processed information about the gambles in ways inconsistent with compensatory models of risky decision making, such as information integration (Anderson & Shanteau, 1970). Furthermore, the inconsistency between observed information acquisition behavior and such compensatory rules increased as the choice task became more complex. Alternative explanations of risky choice behavior are considered.

13.
This research examined differences in classification strategies in object and social domains. Wattenmaker (1995) found that additive classification rules were more compatible with the social than the object domain. The present experiments examined the generality of these results by using fundamentally different types of social and object categories. A sorting paradigm was used to evaluate the frequency with which subjects used additive strategies. In Experiment 1, the social domain was represented by social events that possess very different properties than core social concepts such as traits or occupations. Even with these types of social materials, however, many more additive strategies and family resemblance sorts occurred with social than object materials. In Experiment 2, the object domain was represented by abstract object categories that were designed to possess properties of core social concepts such as traits. Again, however, more additive strategies and family resemblance sorts occurred with social than object materials. The results indicate that differences in the compatibility between additive strategies and object and social domains are not limited to subsets of categories in these domains but rather extend to many types of object and social categories.

14.

Objective

Variability in infant sleep and negative affective behavior (NAB) is a developmental phenomenon that has long been of interest to researchers and clinicians. However, analyses and delineation of such temporal patterns were often limited to basic statistical approaches, which may prevent adequate identification of meaningful variation within these patterns. Modern statistical procedures such as additive models may detect specific patterns of temporal variation in infant behavior more effectively.

Method

One hundred twenty-one mothers were asked to record different behaviors of their healthy infants (aged 4–44 weeks) in diaries over three consecutive days. Circadian patterns, as well as individual trajectories and day-to-day variability of infant sleep and NAB, were modeled with generalized linear models (GLMs) including a linear and quadratic polynomial for time, a GLM with an 8th-order polynomial, a GLM with a harmonic function, a generalized linear mixed model (GLMM) with an 8th-order polynomial, a generalized additive model, and a generalized additive mixed model (GAMM).

Results

The semi-parametric GAMM was found to fit the infant sleep data better than any of the parametric models used. The GLMM with an 8th-order polynomial and the GAMM modeled temporal patterns of infant NAB equally well; the GLMM exhibited a slightly better model fit, while the GAMM was easier to interpret. In addition to the well-known evening clustering of infant NAB, we found a significant second peak in NAB around midday that was not affected by the constant decline in the amount of NAB across the 3-day study period.

Conclusion

Using advanced statistical procedures (GAMM and GLMM), even small variations and phenomena in infant behavior can be reliably detected. Future studies investigating variability and temporal patterns in infant variables may benefit from these statistical approaches.
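The contrast between a high-order polynomial model and a harmonic (circadian) term can be sketched on synthetic diary-like data. This is purely illustrative; a full GAMM fit would need a dedicated library, and the data below are simulated, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.linspace(0, 24, 97)                       # 15-minute time grid
signal = 1.0 + 0.8 * np.sin(2 * np.pi * hours / 24)  # true circadian pattern
y = signal + rng.normal(0, 0.1, hours.size)          # noisy "diary" observations

# 8th-order polynomial fit (the GLM-with-polynomial idea)
poly = np.polynomial.Polynomial.fit(hours, y, deg=8)

# Harmonic regression: intercept + one sine/cosine pair at 24 h period
H = np.column_stack([np.ones_like(hours),
                     np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24)])
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

rmse = lambda fit: np.sqrt(np.mean((fit - signal) ** 2))
print(rmse(poly(hours)), rmse(H @ beta))  # both recover the circadian curve
```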

15.
When traits are measured by the same method (or at the same time), the intertrait correlations are higher than when the intertrait correlation is computed across methods. An empirical investigation of the nature of such method factors is reported, in which the method factors seem to operate in a multiplicative rather than additive way, or in which larger method loadings are associated with larger trait loadings. An implicit-theory-of-personality explanation is offered for the case of ratings as methods. An attenuation model has more general applicability to the cases illustrated. The possibility is raised that the finding, if confirmed in other domains, demonstrates a fundamental inappropriateness of factor analysis for the componential analysis of individual differences data.

16.
Prior to a three-way component analysis of a three-way data set, it is customary to preprocess the data by centering and/or rescaling them. Harshman and Lundy (1984) considered that three-way data actually consist of a three-way model part, which in fact pertains to ratio scale measurements, as well as additive “offset” terms that turn the ratio scale measurements into interval scale measurements. They mentioned that such offset terms might be estimated by incorporating additional components in the model, but discarded this idea in favor of an approach to remove such terms from the model by means of centering. Then estimates for the three-way component model parameters are obtained by analyzing the centered data. In the present paper, the possibility of actually estimating the offset terms is taken up again. First, it is mentioned in which cases such offset terms can be estimated uniquely. Next, procedures are offered for estimating model parameters and offset parameters simultaneously, as well as successively (i.e., providing offset term estimates after the three-way model parameters have been estimated in the traditional way on the basis of the centered data). These procedures are provided for both the CANDECOMP/PARAFAC model and the Tucker3 model extended with offset terms. The successive and the simultaneous approaches for estimating model and offset parameters have been compared on the basis of simulated data. It was found that both procedures perform well when the fitted model captures at least all offset terms actually underlying the data. The simultaneous procedures performed slightly better than the successive procedures. If fewer offset terms are fitted than there are underlying the model, the results are considerably poorer, but in these cases the successive procedures performed better than the simultaneous ones. 
All in all, it can be concluded that the traditional approach for estimating model parameters can hardly be improved upon, and that offset terms can sufficiently well be estimated by the proposed successive approach, which is a simple extension of the traditional approach. The author is obliged to Jos M.F. ten Berge and Marieke Timmerman for helpful comments on an earlier version of this paper. The author is obliged to Iven van Mechelen for making available the data set used in Section 6.

17.
Behavior analysis and statistical inference have shared a conflicted relationship for over fifty years. However, a significant portion of this conflict is directed toward statistical tests (e.g., t-tests, ANOVA) that aggregate group and/or temporal variability into means and standard deviations and as a result remove much of the data important to behavior analysts. Mixed-effects modeling, a more recently developed statistical test, addresses many of the limitations of more basic tests by incorporating random effects. Random effects quantify individual subject variability without eliminating it from the model, hence producing a model that can predict both group and individual behavior. We present the results of a generalized linear mixed-effects model applied to single-subject data taken from Ackerlund Brandt, Dozier, Juanico, Laudont, & Mick, 2015, in which children chose from one of three reinforcers for completing a task. Results of the mixed-effects modeling are consistent with visual analyses and importantly provide a statistical framework to predict individual behavior without requiring aggregation. We conclude by discussing the implications of these results and provide recommendations for further integration of mixed-effects models in the analyses of single-subject designs.
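The core of a random intercept, keeping individual variability in the model rather than averaging it away, is partial pooling. A minimal hand-rolled sketch of that idea with simulated values (not the study's data or its full GLMM pipeline):

```python
import numpy as np

rng = np.random.default_rng(7)
n_subjects, n_sessions = 6, 5
true_subject = rng.normal(10, 2, n_subjects)             # subject-level effects
data = true_subject[:, None] + rng.normal(0, 3, (n_subjects, n_sessions))

subject_means = data.mean(axis=1)
grand_mean = data.mean()
var_between = subject_means.var(ddof=1)                  # between-subject variance
var_within = data.var(axis=1, ddof=1).mean() / n_sessions  # noise in each subject mean
shrink = var_between / (var_between + var_within)        # reliability of a subject mean

# Shrunken per-subject predictions: each individual's estimate is pulled
# toward the group mean in proportion to how noisy their own mean is.
blup = grand_mean + shrink * (subject_means - grand_mean)
print(blup.round(2))  # individual predictions, not one aggregated value
```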

18.
Additive similarity trees
Similarity data can be represented by additive trees. In this model, objects are represented by the external nodes of a tree, and the dissimilarity between objects is the length of the path joining them. The additive tree is less restrictive than the ultrametric tree, commonly known as the hierarchical clustering scheme. The two representations are characterized and compared. A computer program, ADDTREE, for the construction of additive trees is described and applied to several sets of data. A comparison of these results to the results of multidimensional scaling illustrates some empirical and theoretical advantages of tree representations over spatial representations of proximity data. We thank Nancy Henley and Vered Kraus for providing us with data, and Jan deLeeuw for calling our attention to relevant literature. The work of the first author was supported in part by the Psychology Unit of the Israel Defense Forces.
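The additive-tree distance, path length between external nodes, can be illustrated on a tiny hand-built tree. The edge lengths are made up and ADDTREE itself is not reproduced; Dijkstra is overkill on a tree but keeps the sketch general:

```python
import heapq

# Undirected weighted tree: leaves a-d, internal nodes u and v.
edges = {
    ("a", "u"): 2.0, ("b", "u"): 1.0, ("u", "v"): 3.0,
    ("c", "v"): 1.5, ("d", "v"): 2.5,
}
graph = {}
for (x, y), w in edges.items():
    graph.setdefault(x, []).append((y, w))
    graph.setdefault(y, []).append((x, w))

def path_length(src, dst):
    """Length of the (unique) path between two nodes of the tree."""
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        for nxt, w in graph[node]:
            if nxt not in dist or d + w < dist[nxt]:
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt))
    return float("inf")

print(path_length("a", "c"))  # 2.0 + 3.0 + 1.5 = 6.5
```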

19.
Previous research has uncovered many conditions that encourage base-rate use. The present research investigates how base-rates are used when conditions are manipulated to encourage their use in the lawyer/engineer paradigm. To examine the functional form of the response to base-rate, a factorial design was employed in which both base-rate and the individuating information were varied within-subject. We compared the performance of several models of base-rate use, including a model that allows base-rate and individuating information to be combined in a strictly additive fashion, and a model which presumes that respondents use Bayes' Rule in forming their judgments. Results from 1493 respondents showed that the additive model is a stronger predictor of base-rate use than any other model considered, suggesting that the base-rate and individuating information are processed independently in the lawyer/engineer paradigm. A possible mechanism for this finding is discussed. Copyright © 1999 John Wiley & Sons, Ltd.
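The two competing models can be written out directly. The Bayesian version is standard odds-form Bayes' Rule; the additive weights are illustrative assumptions, not estimates from the study:

```python
def bayes_judgment(base_rate, likelihood_ratio):
    """Posterior P(engineer): prior odds times the likelihood ratio
    of the individuating description, converted back to a probability."""
    prior_odds = base_rate / (1 - base_rate)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

def additive_judgment(base_rate, evidence_strength, w_base=0.3, w_evid=0.5):
    """Strictly additive model: independent weighted contributions of
    base rate and evidence, clipped to [0, 1] (weights hypothetical)."""
    return min(1.0, max(0.0, w_base * base_rate + w_evid * evidence_strength))

print(bayes_judgment(0.3, 4.0))     # ≈ 0.632
print(additive_judgment(0.3, 0.8))  # ≈ 0.49
```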

20.
A recent meta-analysis of 103 studies (Burt, Clinical Psychology Review, 29:163-178, 2009a) highlighted the presence of etiological distinctions between aggressive (AGG) and non-aggressive rule-breaking (RB) dimensions of antisocial behavior, such that AGG was more heritable than was RB, whereas RB was more influenced by the shared environment. Unfortunately, behavioral genetic research on antisocial behavior to date (and thus, the research upon which the meta-analysis was based) has relied almost exclusively on the classical twin model. This reliance is problematic, as the strict assumptions that undergird this model (e.g., shared environmental and dominant genetic influences are not present simultaneously; there is no assortative mating) can have significant consequences on heritability estimates when they are violated. The nuclear twin family model, by contrast, allows researchers to relax and statistically evaluate many of the assumptions of the classical twin design by incorporating parental self-report data along with the more standard twin data. The goal of the current study was thus to evaluate whether prior findings of etiological distinctions between AGG and RB persisted when using the nuclear twin family model. We examined a sample of 312 child twin families from the Michigan State University Twin Registry. Results strongly supported prior findings of etiological distinctions between AGG and RB, such that broad genetic influences were observed to be particularly important to AGG whereas shared environmental influences contributed only to RB. Nevertheless, the current findings also implied that additive genetic influences on antisocial behavior may be overestimated when using the classical twin design.
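For intuition, the classical twin design permits a back-of-envelope Falconer-style decomposition from MZ and DZ twin correlations. The numbers are illustrative and this is not the nuclear twin family model used in the study:

```python
def classical_twin_ace(r_mz, r_dz):
    """ACE point estimates from twin correlations (Falconer's formulas)."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # non-shared environment
    return a2, c2, e2

# If r_MZ exceeds 2 * r_DZ, c2 turns negative -- a hint of the
# dominance/epistasis case the classical design cannot separate from
# shared environment, but which the nuclear twin family model can test.
print(classical_twin_ace(0.6, 0.35))  # ≈ (0.5, 0.1, 0.4)
```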
