Similar documents
Found 20 similar documents (search time: 46 ms)
1.
In both theoretical and applied literatures, there is confusion regarding accurate values for expected Black–White subgroup differences in personnel selection test scores. Much confusion arises because empirical estimates of standardized subgroup differences (d) are subject to many of the same biasing factors associated with validity coefficients (i.e., d is functionally related to a point‐biserial r). To address such issues, we review/cumulate, categorize, and analyze a systematic set of many predictor‐specific meta‐analyses in the literature. We focus on confounds due to general use of concurrent, versus applicant, samples in the literature on Black–White d. We also focus on potential confusion due to different constructs being assessed within the same selection test method, as well as the influence of those constructs on d. It is shown that many types of predictors (such as biodata inventories or assessment centers) can have magnitudes of d that are much larger than previously thought. Indeed, some predictors (such as work samples) can have ds similar to those associated with paper‐and‐pencil tests of cognitive ability. We present more realistic values of d for both researcher and practitioner use. Implications for practice and future research are noted.
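The functional relation between d and the point-biserial r that this abstract invokes can be written out directly. A minimal sketch, assuming equal within-group variances and a large-sample approximation (the function names are my own, not the authors'):

```python
import math

def d_to_point_biserial(d: float, p: float) -> float:
    """Convert a standardized mean difference d to a point-biserial r.

    p is the proportion of the sample in one subgroup. Assumes equal
    within-group variances and a large-sample approximation.
    """
    q = 1.0 - p
    return d * math.sqrt(p * q) / math.sqrt(1.0 + d * d * p * q)

def point_biserial_to_d(r: float, p: float) -> float:
    """Inverse conversion: recover d from a point-biserial r."""
    q = 1.0 - p
    return r / math.sqrt(p * q * (1.0 - r * r))
```

With equal subgroups (p = .5) this reduces to the familiar shortcut r = d / sqrt(d² + 4), which also shows why unequal group proportions in concurrent samples can distort d estimates recovered from correlations.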

2.
Variability in strategy selection is an important characteristic of learning new skills such as mathematical skills. Strategies gradually come and go during this development. In 1996, Siegler described this phenomenon as "overlapping waves." In the current microgenetic study, we attempted to model these overlapping waves statistically. In addition, we investigated whether development in strategy selection is related to development in accuracy and to what degree working memory is related to both. We expected that children with poor working memory are limited in their ability to form the associations that are necessary to progress to more mature strategies. This limitation would explain the often-found relationship between working memory and mathematical abilities. To this aim, the strategy selection and accuracy of 98 children who were learning single-digit multiplication were assessed eight times on a weekly basis. Using latent growth modeling for categorical data, we confirmed Siegler's hypothesis of overlapping waves. Moreover, both the intercepts and the slopes of strategy selection and accuracy were strongly interrelated. Finally, working memory predicted both strategy selection and accuracy, confirming that working memory is related to mathematical problem solving in two ways because it influences both the maturity of strategy choice and the probability of making procedural mistakes.
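Siegler's overlapping-waves pattern can be illustrated with a simple multinomial-logit growth model, in the spirit of (but far simpler than) the latent growth model for categorical data the study fits. The strategy names and parameter values below are hypothetical:

```python
import math

def strategy_probabilities(session, slopes, intercepts):
    """Multinomial-logit 'overlapping waves' sketch: the log-odds of each
    strategy changes linearly over sessions, so strategy use rises and
    falls as more mature strategies overtake earlier ones."""
    logits = [a + b * session for a, b in zip(intercepts, slopes)]
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical parameters for three strategies: repeated addition,
# derived facts, and direct retrieval, over sessions 1..8.
intercepts = [2.0, 0.5, -3.0]
slopes = [-0.6, 0.1, 0.9]
for t in range(1, 9):
    print(t, [round(p, 2) for p in strategy_probabilities(t, slopes, intercepts)])
```

Plotted over sessions, each strategy's probability curve rises and then falls as the next strategy overtakes it, which is exactly the "wave" picture.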

3.
Do various operational definitions of visual attention tap the same underlying process? To address this question, we probed visual selective attention using orientation of attention, flanker, and Stroop tasks. These were embedded in combined designs that enabled assessment of each effect, as well as their interaction. For the orientation task, performance was poorer at unexpected than at expected locations. The flanker effects also differed across the two locations. In contrast, the Stroop effects were comparable at expected and unexpected locations. We conclude that spatial attention (tapped by the orientation and the flanker tasks) and dimensional attention (tapped by the Stroop task) engage separate processes of visual selection, both of which are needed in normal attention processing.

4.
The gain from selection (GS) is defined as the standardized average performance of a group of subjects selected in a future sample using a regression equation derived on an earlier sample. Expressions for the expected value, density, and distribution function (DF) of GS are derived and studied in terms of sample size, number of predictors, and the prior distribution assigned to the population multiple correlation. The DF of GS is further used to determine how large sample sizes must be so that with probability .90 (.95), the expected GS will be within 90 percent of its maximum possible value. An approximately unbiased estimator of the expected GS is also derived.

5.
In this paper, modern statistics is considered as a branch of psychometrics, and the question of how the central problems of statistics can be resolved using psychometric methods is investigated. Theories and methods developed in the fields of test theory, scaling, and factor analysis are related to the principal problems of modern statistical theory and method. Topics surveyed include assessment of probabilities, assessment of utilities, assessment of exchangeability, preposterior analysis, adversary analysis, multiple comparisons, the selection of predictor variables, and full-rank ANOVA. Reference is made to some literature from the field of cognitive psychology to indicate some of the difficulties encountered in probability and utility assessment. Some methods for resolving these difficulties using the Computer-Assisted Data Analysis (CADA) Monitor are described, as is some recent experimental work on utility assessment. [1980 Psychometric Society presidential address. The author is indebted to Paul Slovic and David Libby for valuable consultation on the issues discussed in this paper, and to Nancy Turner and Laura Novick for assistance in preparation. Research reported herein was supported under contract number N00014-77-C-0428 from the Office of Naval Research to The University of Iowa, Melvin R. Novick, principal investigator. Opinions expressed herein reflect those of the author and not those of the sponsoring agencies.]

6.
Two types of mechanisms have dominated theoretical accounts of efficient visual search. The first are bottom-up processes related to the characteristics of retinotopic feature maps. The second are top-down mechanisms related to feature selection. To expose the potential involvement of other mechanisms, we introduce a new search paradigm whereby a target is defined only in a context-dependent manner by multiple conjunctions of feature dimensions. Because targets in a multiconjunction task cannot be distinguished from distractors either by bottom-up guidance or top-down guidance, current theories of visual search predict inefficient search. While inefficient search does occur for the multiple conjunctions of orientation with color or luminance, we find efficient search for multiple conjunctions of luminance/size, luminance/shape, and luminance/topology. We also show that repeated presentations of either targets or a set of distractors result in much faster performance and that bottom-up feature extraction and top-down selection cannot account for efficient search on their own. In light of this, we discuss the possible role of perceptual organization in visual search. Furthermore, multiconjunction search could provide a new method for investigating perceptual grouping in visual search.

7.
In test operations using IRT (item response theory), items are included in a test before being used to rate subjects, and the response data are used to estimate their item parameters. However, this method of test operation may lead to leakage of item content, making adequate test operation difficult. To address this problem, Ozaki and Toyoda (2005, 2006) developed item difficulty parameter estimation methods that use paired comparison data, based on the difficulty of items as judged by raters familiar with the field. In the present paper, an improved method of item difficulty parameter estimation is developed. In this new method, an item whose difficulty parameter is to be estimated is compared with multiple items simultaneously with respect to difficulty: a one-to-many rather than a one-to-one comparison. In these comparisons, raters are informed that the items selected from the item pool are ordered by difficulty, and this ordering provides information that improves the accuracy of judgment.
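Difficulty parameters from paired-comparison judgments are commonly scaled with a Bradley-Terry-type model. The sketch below runs the standard MM algorithm on a toy win matrix; it illustrates the general paired-comparison scaling idea, not Ozaki and Toyoda's specific estimator:

```python
def bradley_terry(wins, n_iters=500):
    """wins[i][j] = number of times item i was judged harder than item j.
    Returns strength parameters (higher = more difficult) via the MM
    algorithm, a standard way to scale paired-comparison judgments."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])                       # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        s = sum(new_p)
        p = [x * n / s for x in new_p]               # normalize (identifiability)
    return p

# Toy judgment data: item 2 is usually judged harder than 1, and 1 than 0.
wins = [[0, 2, 1],
        [8, 0, 3],
        [9, 7, 0]]
strengths = bradley_terry(wins)
```

The recovered strengths order the items by judged difficulty, which is the raw material an IRT difficulty scale can then be anchored to.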

8.
A central question in philosophical and sociological accounts of technology is how the agency of technologies should be conceived, that is, how to understand their constitutive roles in the actions performed by assemblages of humans and artifacts. To address this question, I build on the suggestion that a helpful perspective can be gained by amalgamating “actor-network theory” and “postphenomenological” accounts. The idea is that only a combined account can confront both the nuances of human experiential relationships with technology on which postphenomenology specializes, and also the chains of interactions between numerous technologies and humans that actor-network theory can address. To perform this amalgamation, however, several technical adjustments to these theories are required. The central change I develop here is to the postphenomenological notion of “multistability,” i.e., the claim that a technology can be used for multiple purposes through different contexts. I expand the postphenomenological framework through the development of a method called “variational cross-examination,” which involves critically contrasting the various stabilities of a multistable technology for the purpose of exploring how a particular stability has come to dominate. As a guiding example, I explore the case of the everyday public bench. The agency of this “mundane artifact,” as actor-network theorist Bruno Latour would call it, cannot be accounted for by either postphenomenology or actor-network theory alone.

9.
A procedure is presented for determining the mean and variance of the selection differential for top‐down selections in which the candidates come from populations that have a different average score on the selection measure. Although the procedure is based on the same stochastic model and requires identical data to the currently available method for estimating the mean selection differential, it has the advantage that the resulting expressions are valid for finite‐sample selection decisions and that the variance of the selection differential can also be assessed. The difference between the two procedures is illustrated by means of an example application, and it is shown how the present results are particularly helpful in determining the expected utility of personnel selection decisions.
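The quantity in question — the mean and variance of the selection differential when applicants come from populations with different mean scores — can also be approximated by simulation, which is a useful cross-check on analytic expressions. A Monte Carlo sketch (the group means and sample sizes are hypothetical):

```python
import random

def selection_differential(mus, n_per_group, n_select, n_reps=2000, seed=1):
    """Monte Carlo sketch: candidates come from groups with different mean
    scores (unit within-group SD); the top n_select are taken. Returns the
    mean and variance, across replications, of the selection differential
    (selected-group mean minus overall applicant mean)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_reps):
        scores = [rng.gauss(mu, 1.0) for mu in mus for _ in range(n_per_group)]
        selected = sorted(scores, reverse=True)[:n_select]
        diffs.append(sum(selected) / n_select - sum(scores) / len(scores))
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)
    return mean, var

# Two applicant populations whose means differ by half an SD.
mean_sd, var_sd = selection_differential(mus=[0.0, 0.5], n_per_group=50, n_select=10)
```

Because each replication is a finite sample, the simulated variance directly reflects the finite-sample variability that the paper's expressions are designed to capture.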

10.
Ben-Yashar, R., Nitzan, S., & Vos, H. J. (2006). Psicothema, 18(3), 652–660.
This paper compares the determination of optimal cutoff points for single and multiple tests in the field of personnel selection. Decisional skills of predictor tests composing the multiple test are assumed to be endogenous variables that depend on the cutting points to be set. It is shown how the predictor cutoffs and the collective decision rule are determined dependently by maximizing the multiple test's common expected utility. Our main result specifies the condition that determines the relationship between the optimal cutoff points for single and multiple tests, given the number of predictor tests, the collective decision rule (aggregation procedure of predictor tests' recommendations) and the function relating the tests' decisional skills to the predictor cutoff points. The proposed dichotomous decision-making method is illustrated by an empirical example of selecting trainees by means of the Assessment Center method.
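The core idea of choosing a predictor cutoff by maximizing expected utility can be sketched for a single test. The signal-detection setup and utility values below are hypothetical stand-ins, not the paper's model:

```python
import math

def _Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_utility(cut, base_rate, d, u_tp=1.0, u_fp=-1.0, u_fn=-0.5, u_tn=0.0):
    """Expected utility of accepting applicants scoring above `cut`,
    assuming suitable applicants score N(d, 1) and unsuitable ones N(0, 1).
    The four utilities (true/false positive/negative) are illustrative."""
    p_accept_suit = 1.0 - _Phi(cut - d)
    p_accept_unsuit = 1.0 - _Phi(cut)
    return (base_rate * (p_accept_suit * u_tp + (1 - p_accept_suit) * u_fn)
            + (1 - base_rate) * (p_accept_unsuit * u_fp + (1 - p_accept_unsuit) * u_tn))

def optimal_cutoff(base_rate, d, lo=-3.0, hi=3.0, steps=601):
    """Grid search for the utility-maximizing cutoff."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda c: expected_utility(c, base_rate, d))

cut = optimal_cutoff(base_rate=0.3, d=1.0)
```

In a multiple-test setting the paper's point is that such cutoffs can no longer be optimized one test at a time: they interact with the collective decision rule that aggregates the tests' recommendations.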

11.
Selecting scholarship students from a number of competing candidates is a complex decision-making process, in which multiple selection criteria have to be considered simultaneously. Multiattribute decision making (MADM) has proven to be an effective approach for ranking or selecting one or more alternatives from a finite number of alternatives with respect to multiple, usually conflicting criteria. This paper formulates the scholarship student selection process as an MADM problem, and presents suitable compensatory methods for solving the problem. A new empirical validity procedure is developed to deal with the inconsistent ranking problem caused by different MADM methods. The procedure aims at selecting a ranking outcome which has a minimum expected value loss when true attribute weights are not known. An empirical study of a scholarship student selection problem in an Australian university is conducted to illustrate how the selection procedure works.
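A basic compensatory MADM method of the kind the paper discusses is simple additive weighting (SAW): normalize each criterion to [0, 1], then rank candidates by weighted sum. A minimal sketch; the candidate data and weights below are hypothetical:

```python
def saw_rank(scores, weights, benefit):
    """Simple Additive Weighting: min-max normalize each criterion, then
    rank candidates by weighted sum. benefit[j] is True when larger values
    of criterion j are better (a 'benefit' rather than 'cost' criterion)."""
    n_crit = len(weights)
    cols = list(zip(*scores))                 # one column per criterion
    norm_cols = []
    for j in range(n_crit):
        lo, hi = min(cols[j]), max(cols[j])
        span = (hi - lo) or 1.0
        if benefit[j]:
            norm_cols.append([(v - lo) / span for v in cols[j]])
        else:
            norm_cols.append([(hi - v) / span for v in cols[j]])
    totals = [sum(w * norm_cols[j][i] for j, w in enumerate(weights))
              for i in range(len(scores))]
    ranking = sorted(range(len(scores)), key=lambda i: -totals[i])
    return ranking, totals

# Hypothetical candidates scored on GPA (benefit), test score (benefit),
# and financial-need rank (cost: lower is better).
scores = [[3.9, 85, 2], [3.5, 92, 1], [3.7, 78, 3]]
ranking, totals = saw_rank(scores, weights=[0.5, 0.3, 0.2],
                           benefit=[True, True, False])
```

Different MADM methods (SAW, TOPSIS, and others) can produce inconsistent rankings from the same data, which is exactly the problem the paper's expected-value-loss procedure is designed to arbitrate.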

12.
What happens when people try to forget something? What are the consequences of instructing people to intentionally forget a sentence? Recent studies employing the item‐method directed forgetting paradigm have shown that to‐be‐forgotten (TBF) items are, in a subsequent task, emotionally devalued relative to to‐be‐remembered (TBR) items, an aftereffect of memory selection (Vivas, Marful, Panagiotidou & Bajo, 2016). As such, distractor devaluation by attentional selection generalizes to memory selection. In this study, we use the item‐method directed forgetting paradigm to test the effects of memory selection and inhibition on truth judgments of ambiguous sentences. We expected the relative standing of an item in the task (i.e., whether it was instructed to be remembered or forgotten) to affect the truthfulness value of that item, making TBF items less valid/truthful than TBR items. As predicted, ambiguous sentences associated with a “Forget” cue were subsequently judged as less true than sentences associated with a “Remember” cue, suggesting that instructions to intentionally forget a statement can produce changes in the perceived validity/truthfulness of that statement. To our knowledge, this is the first study to show an influence of memory processes involved in selection and forgetting on the perceived truthfulness of sentences.

13.
The use of validated employee selection and promotion procedures is critical to workforce productivity and to the legal defensibility of the personnel decisions made on the basis of those procedures. Consequently, there have been numerous scholarly developments that have considerable implications for the appropriate conduct of criterion‐related validity studies. However, there is no single resource researchers can consult to understand how these developments impact practice. The purpose of this article is to summarize and critically review studies published primarily within the past 10 years that address issues pertinent to criterion‐related validation. Key topics include (a) validity coefficient correction procedures, (b) the evaluation of multiple predictors, (c) differential prediction analyses, (d) validation sample characteristics, and (e) criterion issues. In each section, we discuss key findings, critique and note limitations of the extant research, and offer conclusions and recommendations for the planning and conduct of criterion‐related studies. We conclude by discussing some important but neglected validation issues for which more research is needed.
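Two of the best-known correction procedures under topic (a) have simple closed forms: disattenuation for criterion unreliability, and Thorndike's Case II correction for direct range restriction on the predictor. A minimal sketch of the textbook formulas (not the article's full treatment, which also covers their standard errors and misuse):

```python
import math

def correct_for_attenuation(r_xy, ryy):
    """Disattenuate an observed validity coefficient for criterion
    unreliability; ryy is the criterion reliability."""
    return r_xy / math.sqrt(ryy)

def correct_for_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction on the
    predictor; u = unrestricted SD / restricted SD of the predictor."""
    return r * u / math.sqrt(1.0 + r * r * (u * u - 1.0))
```

For example, an observed r of .30 in a sample whose predictor SD is half the applicant-pool SD (u = 2) corrects to about .53, which is why uncorrected concurrent-sample coefficients understate operational validity.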

14.
The attentional blink (AB) and repetition blindness (RB) phenomena refer to subjects’ impaired ability to detect the second of two different (AB) or identical (RB) target stimuli in a rapid serial visual presentation stream if they appear within 500 msec of one another. Despite the fact that the AB reveals a failure of conscious visual perception, it is at least partly due to limitations at central stages of information processing. Do all attentional limits to conscious perception have their locus at this central bottleneck? To address this question, here we investigated whether RB is affected by online response selection, a cognitive operation that requires central processing. The results indicate that, unlike the AB, RB does not result from central resource limitations. Evidently, temporal attentional limits to conscious perception can occur at multiple stages of information processing.

15.
Chris Eliasmith (2007). Synthese, 159(3), 373–388.
To have a fully integrated understanding of neurobiological systems, we must address two fundamental questions: 1. What do brains do (what is their function)? and 2. How do brains do whatever it is that they do (how is that function implemented)? I begin by arguing that these questions are necessarily inter-related. Thus, addressing one without consideration of an answer to the other, as is often done, is a mistake. I then describe what I take to be the best available approach to addressing both questions. Specifically, to address 2, I adopt the Neural Engineering Framework (NEF) of Eliasmith & Anderson [Neural engineering: Computation, representation, and dynamics in neurobiological systems. Cambridge, MA: MIT Press, 2003], which identifies implementational principles for neural models. To address 1, I suggest that adopting statistical modeling methods for perception and action will be functionally sufficient for capturing biological behavior. I show how these two answers will be mutually constraining, since the process of model selection for the statistical method in this approach can be informed by known anatomical and physiological properties of the brain, captured by the NEF. Similarly, the application of the NEF must be informed by functional hypotheses, captured by the statistical modeling approach.

16.
A great deal of time, effort and money is involved in the training of counsellors; hence it is most important that the selection process discriminates between those candidates who are likely to achieve the standard required at the end of the training and those who are not. This paper discusses some of the issues pertinent to the selection of candidates for training as counsellors. It considers the question of whether counsellors are born or made, in other words whether anyone can be a good counsellor given the right training or whether they have to be suitable before they start. The paper uses Kohutian concepts to address issues of narcissism, as well as considering equal opportunities and personal development.

17.
The Doolittle, Wherry-Doolittle, and Summerfield-Lubin methods of multiple correlation are compared theoretically as well as by an application in which a set of predictors is selected. Wherry's method and the Summerfield-Lubin method are shown to be equivalent; the relationship of these methods to the Doolittle method is indicated. The Summerfield-Lubin method, because of its compactness and ease of computation, and because of the meaningfulness of the interim computational values, is recommended as a convenient least squares method of multiple correlation and predictor selection.

18.
In a fuzzy multiple criteria decision‐making (MCDM) problem with a hierarchical structure of more than two levels and involving multiple decision‐makers (DMs), finding the exact membership functions of the final aggregation ratings of all feasible alternatives is almost impossible. Thus, ranking methods based on exact membership functions cannot be utilized to rank the feasible alternatives and complete the optimal selection. To resolve this complexity and to incorporate assessments of all DMs' viewpoints, in this paper a fuzzy MCDM method with multiple DMs, based on the concepts of fuzzy set theory and α‐cut, is developed. This method incorporates a number of perspectives on how to approach the fuzzy MCDM problem with multiple DMs, as follows: (1) combining quantitative and qualitative criteria as well as negative and positive ones; (2) using the generalized means to develop the aggregation method of multiple DMs' opinions; (3) incorporating the risk attitude index β to convey the total risk attitude of all DMs by using the estimation data obtained at the data input stage; (4) employing the algebraic operations of fuzzy numbers based on the concept of α‐cut to calculate the final aggregation ratings, and developing a matching ranking method for the proposed fuzzy MCDM method with multiple DMs. Furthermore, we use this method to survey the site selection for a free port zone (FPZ) in Taiwan as an empirical study to demonstrate the proposed fuzzy MCDM algorithm. The result of this empirical investigation shows that the port of Kaohsiung, the largest international port of Taiwan as well as the sixth-largest container port in the world in 2004, is optimal for the Taiwan government in enacting the FPZ plan. Copyright © 2005 John Wiley & Sons, Ltd.
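The α-cut machinery in step (4) is easy to make concrete for triangular fuzzy numbers: an α-cut is a closed interval, and fuzzy arithmetic on the ratings reduces to interval arithmetic on those cuts. A minimal sketch (the example ratings are hypothetical, and real applications use many α levels, not just one):

```python
def alpha_cut(tfn, alpha):
    """Alpha-cut of a triangular fuzzy number (a, b, c): the interval of
    values whose membership degree is at least alpha."""
    a, b, c = tfn
    return (a + alpha * (b - a), c - alpha * (c - b))

def interval_add(x, y):
    """Interval addition: endpoints add."""
    return (x[0] + y[0], x[1] + y[1])

def interval_mul(x, y):
    """Interval multiplication: min/max over endpoint products."""
    products = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(products), max(products))

# Aggregate two hypothetical fuzzy ratings at alpha = 0.5.
agg = interval_add(alpha_cut((2, 3, 4), 0.5), alpha_cut((1, 2, 5), 0.5))
```

Repeating this at a grid of α levels reconstructs the membership function of the aggregated rating, which is what the method's final ranking step operates on.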

19.
A complex selection situation encompasses vacancies for several different positions and applicants who apply simultaneously for one or several of these positions. This article presents an analytic method for estimating the expected selection quality, as well as the adverse impact ratio, of these complex selections when the decisions are based on a single predictor composite score. In addition, the method is integrated within a broader decision‐making framework for designing complex selection decisions that show a Pareto‐optimal balance between the selection quality and diversity goals. Finally, the decision aid is used to demonstrate the importance of applying the appropriate selection format (either the simple or the complex format) when exploring the front of Pareto‐optimal outcomes of planned selections.
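The adverse impact ratio itself is a simple quantity: the ratio of the lower group's selection rate to the higher group's, conventionally compared against the four-fifths rule of thumb. A sketch with hypothetical counts:

```python
def adverse_impact_ratio(selected_a, applied_a, selected_b, applied_b):
    """Ratio of the lower selection rate to the higher selection rate
    across two applicant groups; values below 0.8 trigger the common
    four-fifths rule of thumb for potential adverse impact."""
    rate_a = selected_a / applied_a
    rate_b = selected_b / applied_b
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher

# Hypothetical counts: 10 of 50 selected in one group, 30 of 100 in the other.
air = adverse_impact_ratio(10, 50, 30, 100)
```

The article's harder problem is estimating this ratio, and selection quality, analytically before the selection is run, when applicants can compete for several positions at once.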

20.
Political psychologists have been quick to use prospect theory in their work, realizing its potential for explaining decisions under risk. Applying prospect theory to political decision‐making is not without problems, though, and here we address two of these: (1) Does prospect theory actually apply to political decision‐makers, or are politicians unlike the rest of us? (2) Which dimension do politicians use as their reference point when there are multiple dimensions (e.g., votes and policy)? We address both problems in an experiment with a unique sample of Dutch members of parliament as participants. We use well‐known (incentivized) decision situations and newly developed hypothetical political decision‐making scenarios. Our results indicate that politicians deviate from expected utility theory in the direction predicted by prospect theory but that these deviations are somewhat smaller than those of other people. Votes appear to be a more important determinant of politicians’ reference point than is policy.
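The prospect-theory deviations from expected utility that the study tests can be made concrete with the standard Kahneman-Tversky value function: concave for gains, convex and steeper for losses, relative to a reference point. The parameter values below are the oft-cited 1992 population estimates, not values fitted to the politicians in this study:

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky prospect-theory value function relative to a
    reference point of 0: x**alpha for gains, -lam * (-x)**beta for
    losses, with lam > 1 capturing loss aversion."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta
```

Loss aversion means a loss of 100 "looms larger" than a gain of 100; the study's reference-point question asks, in effect, which dimension (votes or policy) defines the zero of this function for politicians.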
