Similar Articles
 20 similar articles found (search time: 46 ms)
1.
Perceptual learning, improvement in perceptual skill with practice, can improve both accuracy and consistency of perceptual reports. Regression statistics can quantify ongoing calibration of perceptible scalar properties (i.e., improvements in accuracy and consistency) because, ideally, actual and perceived values are linearly related. Changes in variance accounted for (r2) track changes in consistency, and changes in both slope and intercept track changes in accuracy. Conjoint changes in all three regression statistics, obscured in separate plots, can be seen simultaneously in a perceptual calibration state space diagram, with the regression statistics as axes, in which an attractor (r2 = 1.00, slope = 1.00, intercept = 0.00) represents optimal performance. Decreases in the distance between the attractor and successive points in the state space, each representing perceptual performance, quantify perceptual learning; that distance is a perceptual calibration index. To show the utility of the perceptual calibration index, we illustrate its use in an experiment on wielding hand-held objects.
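A minimal Python sketch of the calibration index described above. It assumes a Euclidean distance in the (r², slope, intercept) state space (the abstract specifies a distance but not its form), and the practice-block data are invented for illustration, not taken from the wielding experiment:

```python
import numpy as np
from scipy import stats

def perceptual_calibration_index(actual, perceived):
    """Distance in (r^2, slope, intercept) state space from the
    optimal-performance attractor (1.00, 1.00, 0.00).

    A smaller index indicates better-calibrated perception; a
    decreasing index across practice blocks indicates perceptual
    learning. Euclidean distance is assumed here for illustration.
    """
    slope, intercept, r, _, _ = stats.linregress(actual, perceived)
    point = np.array([r**2, slope, intercept])
    attractor = np.array([1.0, 1.0, 0.0])
    return np.linalg.norm(point - attractor)

# Hypothetical practice blocks: perceived lengths of wielded rods (cm)
rng = np.random.default_rng(0)
actual = np.array([40.0, 55.0, 70.0, 85.0, 100.0])
early = 0.6 * actual + 20 + rng.normal(0, 8, actual.size)   # poorly calibrated
late  = 0.95 * actual + 2 + rng.normal(0, 2, actual.size)   # after practice

print(perceptual_calibration_index(actual, early))  # larger distance
print(perceptual_calibration_index(actual, late))   # smaller distance
```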

2.
Ratio versus difference comparators in choice.
Several theories in the learning literature describe decision rules for performance utilizing ratios and differences. The present paper analyzes rules for choice based on either delays to food, immediacies (the inverse of delays), or rates of food, combined factorially with a ratio or difference comparator. An experiment using the time-left procedure (Gibbon & Church, 1981) is reported with motivational differentials induced by unequal reinforcement durations. The preference results were compatible with a ratio-comparator decision rule, but not with decision rules based on differences. Differential reinforcement amounts were functionally equivalent to changes in delays to food. Under biased reinforcement, overall food rate was increased, but variance in preference was increased or decreased depending on which alternative was favored. This is a Weber law finding that is compatible with multiplicative, scalar sources of variance but incompatible with pacemaker rate changes proportional to food presentation rate.
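As a hedged sketch of the generic contrast at issue (not the paper's model), the following Python snippet compares a ratio comparator with a difference comparator operating on immediacies (reciprocals of delay); the logistic link and the function names are illustrative assumptions:

```python
import numpy as np

def immediacy(delay):
    """Immediacy is the reciprocal of delay to food."""
    return 1.0 / np.asarray(delay, dtype=float)

def ratio_rule(delay_a, delay_b):
    """Ratio comparator: preference for A follows the ratio of
    immediacies (matching-law style), so it is invariant when both
    delays are multiplied by the same constant."""
    ha, hb = immediacy(delay_a), immediacy(delay_b)
    return ha / (ha + hb)

def difference_rule(delay_a, delay_b, gain=1.0):
    """Difference comparator: preference depends on the signed
    difference of immediacies passed through a (hypothetical)
    logistic link, so it changes when both delays are rescaled."""
    ha, hb = immediacy(delay_a), immediacy(delay_b)
    return 1.0 / (1.0 + np.exp(-gain * (ha - hb)))

# Doubling both delays leaves the ratio rule unchanged but not the
# difference rule -- the kind of divergence choice data can detect.
print(ratio_rule(10, 30), ratio_rule(20, 60))          # 0.75, 0.75
print(difference_rule(10, 30), difference_rule(20, 60))
```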

3.
The aim of this paper is to present a dynamic goal programming scheme. Throughout the reviewed literature, most dynamic goal programming approaches use target values on the final value of the objective functionals. In this paper, dynamic target values are assumed, so that they control not only the final values of the corresponding functionals but also their evolution over the planning period. As a result, the scalar problems are also dynamic ones, in which the evolution of the deviation variables is minimized. A lexicographic dynamic goal programming algorithm is developed on this basis, and some considerations are made on the efficiency of the final solutions. © 1998 John Wiley & Sons, Ltd.
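A schematic formulation of the kind of lexicographic dynamic goal programme the abstract describes, with per-period targets and deviation variables; the notation is illustrative rather than the paper's:

```latex
% Schematic lexicographic dynamic goal programme (illustrative notation):
\begin{align*}
\text{lex min} \quad & \Bigl( \textstyle\sum_{t=1}^{T} w^{(1)}_t\,(n^{(1)}_t + p^{(1)}_t),\;
                       \sum_{t=1}^{T} w^{(2)}_t\,(n^{(2)}_t + p^{(2)}_t),\;\dots \Bigr) \\
\text{s.t.} \quad & f_k(x_t, t) + n^{(k)}_t - p^{(k)}_t = g^{(k)}_t,
                    \qquad k = 1,\dots,K,\; t = 1,\dots,T, \\
                  & x_{t+1} = h(x_t, u_t, t), \qquad
                    n^{(k)}_t \ge 0,\; p^{(k)}_t \ge 0 .
\end{align*}
```

Because the targets g_t^(k) are indexed by t, the deviation variables constrain the whole trajectory rather than only the final value, and each priority level's summed deviations are minimized in turn.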

4.
McFalls and Gallagher (1979) have found a strong relationship between the occupational values and political orientations of college students. Their study was based on the results of a sample survey conducted in 1969. However, the political climate on college campuses has changed dramatically, and so has the nature of the job market. A new survey was conducted in 1981 which was identical to the 1969 survey. Its objective was to determine if the same political group differentials in occupational values which existed in the politically tumultuous late sixties and early seventies still hold in the more placid 1980s. The findings are reported here.  相似文献   

5.
When analysts evaluate performance assessments, they often use modern measurement theory models to identify raters who frequently give ratings that are different from what would be expected, given the quality of the performance. To detect problematic scoring patterns, two rater fit statistics, the infit and outfit mean square error (MSE) statistics, are routinely used. However, the interpretation of these statistics is not straightforward. A common practice is that researchers employ established rule-of-thumb critical values to interpret infit and outfit MSE statistics. Unfortunately, prior studies have shown that these rule-of-thumb values may not be appropriate in many empirical situations. Parametric bootstrapped critical values for infit and outfit MSE statistics provide a promising alternative approach to identifying item and person misfit in item response theory (IRT) analyses. However, researchers have not examined the performance of this approach for detecting rater misfit. In this study, we illustrate a bootstrap procedure that researchers can use to identify critical values for infit and outfit MSE statistics, and we used a simulation study to assess the false-positive and true-positive rates of these two statistics. We observed that the false-positive rates were highly inflated, and the true-positive rates were relatively low. Thus, we proposed an iterative parametric bootstrap procedure to overcome these limitations. The results indicated that using the iterative procedure to establish 95% critical values of infit and outfit MSE statistics had better-controlled false-positive rates and higher true-positive rates compared to using the traditional parametric bootstrap procedure and rule-of-thumb critical values.
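A simplified parametric-bootstrap sketch in Python for a dichotomous IRT-type setting (not the many-facet rater model of the study). It shows a single round of the procedure; the iterative variant proposed in the paper would re-estimate and repeat. Names and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def outfit_mse(x, p):
    """Outfit mean square: mean squared standardized residual."""
    return np.mean((x - p) ** 2 / (p * (1 - p)))

def bootstrap_critical_value(p_hat, n_boot=1000, q=0.95):
    """Parametric bootstrap: simulate responses from the estimated
    probabilities, recompute the fit statistic each time, and take
    the q-th quantile as the critical value."""
    stats_ = [outfit_mse(rng.binomial(1, p_hat), p_hat)
              for _ in range(n_boot)]
    return np.quantile(stats_, q)

# Hypothetical model-implied probabilities for one rater/person
p_hat = rng.uniform(0.2, 0.8, size=50)
observed = rng.binomial(1, p_hat)           # observed scored responses
crit = bootstrap_critical_value(p_hat)
print(outfit_mse(observed, p_hat), crit)    # flag misfit if statistic > crit
```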

6.
This paper provides an evolutionary rationale for both interracial and intraracial wage differentials by examining the implications of white employers mediating their employer‐employee relationships on the basis of genetic similarity. If, in organized labor markets, relationships mediated through genetic similarity are optimal in terms of Darwinian fitness, a fundamental evolutionary implication is that the Marginal Rate of Substitution (MRS) in Darwinian fitness holding extended fitness constant equals the MRS in preferences holding utility constant. Given such an evolutionary equilibrium, results are derived showing that the strength of tastes for discrimination depends upon the skin hue of non‐white workers. The rationale established for racial wage differentials is that where skin hue serves to indicate genetic similarity between employer and employee, wage differentials emerge that are a function of skin hue.

7.
This article reports a detailed examination of timing in the vibrotactile modality and comparison with that of visual and auditory modalities. Three experiments investigated human timing in the vibrotactile modality. In Experiment 1, a staircase threshold procedure with a standard duration of 1,000 ms revealed a difference threshold of 160.35 ms for vibrotactile stimuli, which was significantly higher than that for auditory stimuli (103.25 ms) but not significantly lower than that obtained for visual stimuli (196.76 ms). In Experiment 2, verbal estimation revealed a significant slope difference between vibrotactile and auditory timing, but not between vibrotactile and visual timing. That is, both vibrations and lights were judged as shorter than sounds, and this comparative difference was greater at longer durations than at shorter ones. In Experiment 3, performance on a temporal generalization task showed characteristics consistent with the predictions of scalar expectancy theory (SET: Gibbon, 1977), with both mean accuracy and scalar variance exhibited. The results were modelled using the modified Church and Gibbon model (MCG; derived by Wearden, 1992, from Church & Gibbon, 1982). The model was found to give an excellent fit to the data, and the parameter values obtained were compared with those for visual and auditory temporal generalization. The pattern of results suggests that timing in the vibrotactile modality conforms to SET and that the internal clock speed for vibrotactile stimuli is significantly slower than that for auditory stimuli, which is logically consistent with the significant differences in difference threshold that were obtained.
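Read as Weber fractions against the 1,000-ms standard (a back-of-envelope reading, assuming the thresholds are expressed relative to that standard), the Experiment 1 difference thresholds work out as:

```latex
W = \frac{\Delta T}{T}: \qquad
W_{\text{vibrotactile}} \approx \frac{160.35}{1000} \approx 0.16, \quad
W_{\text{auditory}} \approx \frac{103.25}{1000} \approx 0.10, \quad
W_{\text{visual}} \approx \frac{196.76}{1000} \approx 0.20 .
```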

8.
Monte Carlo resampling methods to obtain probability values for chi-squared and likelihood-ratio test statistics for multiway contingency tables are presented. A resampling algorithm provides random arrangements of cell frequencies in a multiway contingency table, given fixed marginal frequency totals. Probability values are obtained from the proportion of resampled test statistic values equal to or greater than the observed test statistic value.
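A minimal Python sketch of this kind of resampling scheme for a two-way table: the table is expanded to case-level data and one variable's labels are shuffled, which keeps both sets of marginal totals fixed. The function names are illustrative, and this generic label-permutation variant is not necessarily the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def chi_squared(table):
    """Pearson chi-squared statistic for a two-way contingency table."""
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def resampling_p_value(table, n_resamples=5000):
    """Monte Carlo p-value: the proportion of resampled statistics
    (from tables with the same marginal totals) that are >= the
    observed statistic."""
    table = np.asarray(table, dtype=int)
    observed = chi_squared(table)
    rows, cols = np.indices(table.shape)
    row_labels = np.repeat(rows.ravel(), table.ravel())
    col_labels = np.repeat(cols.ravel(), table.ravel())
    hits = 0
    for _ in range(n_resamples):
        shuffled = rng.permutation(col_labels)   # preserves both margins
        resampled = np.zeros_like(table)
        np.add.at(resampled, (row_labels, shuffled), 1)
        hits += chi_squared(resampled) >= observed
    return hits / n_resamples

print(resampling_p_value([[12, 3], [5, 10]]))
```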

9.
The underlying assumptions of Fechnerian scaling are complemented by an assumption that ensures that any psychometric differential (the rise in the value of a discrimination probability function as one moves away from its minimum in a given direction) regularly varies at the origin with a positive exponent. This is equivalent to the following intuitively plausible property: any two psychometric differentials are comeasurable in the small (i.e., asymptotically proportional at the origin), without, however, being asymptotically equal to each other unless the corresponding values of the Fechner-Finsler metric function are equal. The regular variation version of Fechnerian scaling generalizes the previously proposed power function version while retaining its computational and conceptual simplicity.
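For reference, the standard definition alluded to here, in generic notation (not necessarily the paper's): a psychometric differential F, viewed as a function of the step size s away from the minimum, is regularly varying at the origin with exponent μ > 0 if

```latex
\lim_{s \to 0^{+}} \frac{F(\lambda s)}{F(s)} = \lambda^{\mu}
\qquad \text{for every } \lambda > 0 .
```

Comeasurability in the small then means that any two such differentials are asymptotically proportional, F_1(s)/F_2(s) tending to a finite positive constant as s approaches 0 from above, with that constant equal to 1 only when the corresponding values of the Fechner-Finsler metric function coincide.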

10.
11.
Experience and problem representation in statistics
This research investigated experience level differences in problem representation in statistics. A triad judgment task was designed so that source problems shared either surface similarity (story narrative) or structural (inferential level) features (t test, correlation, or chi-square) with the target problem. Graduate students with varying levels of experience in statistics were asked to choose which source problem "goes best" with the target problem for each triad. Given a choice between a problem that shares surface-level characteristics and one that shares inferential-level characteristics, students who had taken 0 to 4 courses in statistics tended to represent problems on the basis of surface-level features. Students who had more than 4 courses did not consistently make choices on the basis of surface-level features, nor did they consistently rely on structural features. However, all students with statistics course backgrounds noticed structural features when competition between different types of features was eliminated. The role of surface and structural features in determining problem representations is discussed.

12.
Federal fair employment legislation, administrative guidelines, and court cases that relate to training are reviewed. The review identified three issues: (a) selection practices which use training as a criterion, (b) justification of training programs where there has been a determination of disparate treatment, and (c) the use of training programs as a legitimate factor other than sex to justify pay differentials. The review indicates most Courts will not accept training success as a valid criterion for test selection, although tests for minimum standards might be validated against training alone; Courts generally review equal treatment factors rather than determining the business necessity of training; and finally, there are definite guidelines for demonstrating pay differentials based on training programs. The implications for courts and professionals conclude the paper.

13.
Computationally intensive methods of statistical inference do not fit the current canon of pedagogy in statistics. To accommodate these methods and the logic underlying them, I propose seven pedagogical principles: (1) Define inferential statistics as techniques for reckoning with chance. (2) Distinguish three types of research: sample surveys, in which statistics affords generalization from the cases studied; experiments, in which statistics detects systematic differences among the batches of data obtained in the several conditions; and correlational studies, in which statistics detects systematic associations between variables. (3) Teach random-sampling theory in the context of sample surveys, augmenting the conventional treatment with bootstrapping. Regarding experimentation, (4) note that random assignment fosters internal but not external validity, (5) explain the general logic for testing a null model, and (6) teach randomization tests as well as t, F, and χ2. (7) Regarding correlational studies, acknowledge the problems of applying inferential statistics in the absence of deliberately introduced randomness.
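As a small illustration of principle (3), here is a Python sketch of a percentile-bootstrap confidence interval for a sample-survey mean; the survey data are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def bootstrap_ci(sample, n_boot=10_000, level=0.95):
    """Percentile bootstrap confidence interval for the mean of a
    random sample -- the resampling companion to random-sampling
    theory in the sample-survey context."""
    sample = np.asarray(sample, dtype=float)
    means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.quantile(means, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

survey = rng.normal(50, 10, size=40)     # hypothetical survey responses
print(survey.mean(), bootstrap_ci(survey))
```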

14.
Aim: This paper highlights some of the areas where there are problems with the way that statistics are conducted and reported in psychology journals. Recommendations are given for improving these problems. Sample: The choice of topics is based largely on the questions that authors, reviewers, and editors have asked in recent years. The focus is on null hypothesis significance testing (NHST), choosing a statistical test, and what should be included in results sections. Results: There are several ways to improve how statistics are reported. These should improve both the authors' and the readers' understanding of the data. Conclusions: Psychology as a discipline will improve if the way in which statistics are conducted and reported is improved. This will require effort from authors, scrutiny from reviewers, and stubbornness from editors.

15.
16.
Human performance on an analogue of an interval bisection task
Two experiments used normal adult human subjects in an analogue of a time interval bisection task frequently used with animals. All presented durations were defined by the time between two very brief clicks, and all durations were less than 1 sec, to avoid complications arising from chronometric counting. In Experiment 1 different groups of subjects received standard durations of either 0.2 and 0.8 or 0.1 and 0.9 sec and then classified a range of durations including these values in terms of their similarity to the standard short (0.2- or 0.1-sec) and long (0.8- or 0.9-sec) durations. The bisection point (defined as the duration classified as "long" on 50% of trials) was located at 0.43 sec in the 0.2-0.8 group, and at 0.46 sec in the 0.1-0.9 group. Experiment 2 replicated Experiment 1 using a within-subject procedure. The bisection point of both the 0.2- and 0.8-sec and the 0.1- and 0.9-sec durations was found to be 0.44 sec. Both experiments thus found the bisection point to be located at a duration just lower than the arithmetic mean of the standard short and long durations, rather than at the geometric mean, as in animal experiments. Some other performance measures, such as the difference limen and the Weber ratio, were, however, of similar values to those found in bisection tasks with animals. A theoretical model assuming that humans bisect by taking the difference between a presented duration and the short and long standards, as well as having a bias to respond "long", fitted the data well. The model incorporated scalar representations of standard durations and thus illustrated a way in which the obtained results, although different from those found with animal subjects, could be reconciled with scalar timing theory.
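For concreteness, the two classical bisection predictions for the 0.2-0.8 sec condition work out as follows; the observed bisection points (0.43-0.46 sec) sit just below the arithmetic mean, unlike the geometric-mean bisection typical of animal studies:

```latex
\text{arithmetic mean: } \frac{0.2 + 0.8}{2} = 0.50\ \text{s},
\qquad
\text{geometric mean: } \sqrt{0.2 \times 0.8} \approx 0.40\ \text{s}.
```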

17.
A compact notation for obtaining and handling matrices of partial derivatives is suggested in an attempt to generalize "symbolic vector differentiation" to matrices of independent variables. The proposed technique differs from methods advocated by Dwyer and MacPhail (1948) and Wrobleski (1963) in several respects, notably in a deliberate limitation on the classes of scalar functions considered: traces and determinants. Narrowing interest to these two classes of scalar matrix functions allows one to invoke certain algebraic identities that simplify the problem, because (a) the treatment of traces of products of matrices can be reduced to that of a few representatives of large equivalence classes of such products, all having the same formal derivative, and because (b) the more involved task of differentiating determinants of matrix products can be translated into the more amenable problem of differentiating the traces of such products. A number of illustrative examples are included in an attempt to show that the above limitation is not as serious as might at first appear, because traces and determinants apply to a wide range of psychometric and statistical problems.
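Standard identities of the kind such a scheme exploits, in generic notation (these are textbook matrix-calculus results, not necessarily the note's own formulas):

```latex
\frac{\partial}{\partial X}\operatorname{tr}(AX) = A^{\top},
\qquad
\frac{\partial}{\partial X}\operatorname{tr}\!\bigl(X^{\top}AX\bigr) = (A + A^{\top})X,
\qquad
\frac{\partial}{\partial X}\ln\lvert X \rvert = \bigl(X^{\top}\bigr)^{-1},
\qquad
d\,\ln\lvert X \rvert = \operatorname{tr}\!\bigl(X^{-1}\,dX\bigr).
```

The last identity is what lets determinant problems be translated into trace problems, and the cyclic invariance of the trace is what collapses products of matrices into the equivalence classes with a common formal derivative mentioned above.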

18.
Lord, Frederic M. (1960). Psychometrika, 25(4), 325-342.
Formulas are derived for using the available item statistics and score statistics on a test to estimate the moments of the score distribution of a lengthened (or shortened) form of the same test. Other formulas are derived for estimating the bivariate moments of the scatterplot between two parallel test forms using only the data available on either form alone. An empirical study is made showing in each case satisfactory agreement between the theoretical values predicted from the formulas and the values actually observed. These results suggest the utility of the true-score model used in deriving the formulas. This work was supported by contract Nonr-2752(00) between the Office of Naval Research and Educational Testing Service. Reproduction in whole or in part for any purpose of the United States Government is permitted.

19.
Twenty-four-month-old and 4-month-old rats were trained on a peak-interval procedure, where the time of reinforcement was varied twice between 20 and 40 sec. Peak times from the old rats were consistently longer than the reinforcement time, whereas those from younger animals tracked the 20- and 40-sec durations more closely. Different measures of performance suggested that the old rats were either (1) systematically misremembering the time of reinforcement or (2) using an internal clock with a substantially greater latency to start and stop timing than the younger animals. Old rats also adjusted more slowly to the first transition from 20 to 40 sec than did the younger ones, but not to later transitions. Correlations between measures derived from within-trial patterns of responding conformed in general to detailed predictions derived from scalar expectancy theory. However, some correlation values more closely resembled those derived from a study of peak-interval performance in humans and a theoretical model developed by Cheng and Westwood (1993), than those obtained in previous work with animals, for reasons that are at present unclear.

20.
Randomization tests are a class of nonparametric statistics that determine the significance of treatment effects. Unlike parametric statistics, randomization tests do not assume a random sample, or make any of the distributional assumptions that often preclude statistical inferences about single‐case data. A feature that randomization tests share with parametric statistics, however, is the derivation of a p‐value. P‐values are notoriously misinterpreted and are partly responsible for the putative "replication crisis." Behavior analysts might question the utility of adding such a controversial index of statistical significance to their methods, so it is the aim of this paper to describe the randomization test logic and its potentially beneficial consequences. In doing so, this paper will: (1) address the replication crisis as a behavior analyst views it, (2) differentiate the problematic p‐values of parametric statistics from the, arguably, more useful p‐values of randomization tests, and (3) review the logic of randomization tests and their unique fit within the behavior analytic tradition of studying behavioral processes that cut across species.
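A minimal Python sketch of the randomization-test logic for a two-condition comparison; the data and the effect measure (a difference in means) are illustrative and not tied to a particular single-case design:

```python
import numpy as np

rng = np.random.default_rng(4)

def randomization_test(treatment, control, n_shuffles=10_000):
    """Randomization-test p-value for a difference in means: under the
    null model that assignment is arbitrary, shuffle the assignment
    labels and count how often the shuffled difference is at least as
    extreme as the observed one."""
    data = np.concatenate([treatment, control])
    n_t = len(treatment)
    observed = np.mean(treatment) - np.mean(control)
    hits = 0
    for _ in range(n_shuffles):
        shuffled = rng.permutation(data)
        diff = shuffled[:n_t].mean() - shuffled[n_t:].mean()
        hits += abs(diff) >= abs(observed)
    return hits / n_shuffles

# Hypothetical response measures in two randomly assigned conditions
treatment = np.array([12.0, 15.0, 14.0, 18.0, 16.0])
control   = np.array([10.0, 11.0,  9.0, 13.0, 12.0])
print(randomization_test(treatment, control))
```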

