Similar literature
20 similar documents retrieved (search time: 31 ms)
1.
Serves as an introduction to a special section of the journal on laboratory and performance-based measures of childhood disorders. The articles in the special section were part of the work of a task force established by Division 12 of the American Psychological Association on "Upgrading the Science and Technology of Assessment and Diagnosis." In this introduction, I raise a number of issues involved in the use of laboratory and performance-based measures for the assessment of childhood psychopathology that cut across the different disorders covered in the special section. Some of these issues are common to most techniques used in the assessment of childhood psychopathology; others are more specific to this particular method of assessment. Focusing on these issues will, it is hoped, encourage a critical examination of all techniques currently used in the assessment of psychopathology and highlight important issues involved in translating measures developed primarily for research into forms that are useful in clinical practice.

2.
This paper studies three models for cognitive diagnosis, each illustrated with an application to fraction subtraction data. The objective of each of these models is to classify examinees according to their mastery of skills assumed to be required for fraction subtraction. We consider the DINA model, the NIDA model, and a new model that extends the DINA model to allow for multiple strategies of problem solving. For each of these models the joint distribution of the indicators of skill mastery is modeled using a single continuous higher-order latent trait, to explain the dependence in the mastery of distinct skills. This approach stems from viewing the skills as the specific states of knowledge required for exam performance, and viewing these skills as arising from a broadly defined latent trait resembling the θ of item response models. We discuss several techniques for comparing models and assessing goodness of fit. We then implement these methods using the fraction subtraction data with the aim of selecting the best of the three models for this application. We employ Markov chain Monte Carlo algorithms to fit the models, and we present simulation results to examine the performance of these algorithms. The work reported here was performed under the auspices of the External Diagnostic Research Team funded by Educational Testing Service. Views expressed in this paper do not necessarily represent the views of Educational Testing Service.
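
For readers unfamiliar with the DINA model named in this abstract, its item response function has a simple closed form: an examinee answers correctly with probability 1 − slip when every skill required by the item is mastered, and with the guessing probability otherwise. The sketch below illustrates this with an invented Q-matrix row and guess/slip values; it is not the paper's higher-order MCMC implementation.

```python
# Minimal illustration of the DINA item response function (guess/slip
# parameterization). Q-matrix row and parameter values are invented;
# this is not the higher-order MCMC estimation described in the abstract.
import numpy as np

def dina_prob(alpha, q_row, guess, slip):
    """P(correct) for one examinee on one item.

    alpha : binary array of skill-mastery indicators
    q_row : binary array of skills required by the item (Q-matrix row)
    guess : P(correct) when at least one required skill is missing
    slip  : P(incorrect) despite mastering all required skills
    """
    eta = int(np.all(alpha[q_row == 1] == 1))     # mastered every required skill?
    return (1 - slip) ** eta * guess ** (1 - eta)

# Toy item requiring skills 1 and 3 out of three skills.
q_row = np.array([1, 0, 1])
print(dina_prob(np.array([1, 1, 1]), q_row, guess=0.2, slip=0.1))  # 0.9
print(dina_prob(np.array([1, 1, 0]), q_row, guess=0.2, slip=0.1))  # 0.2
```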

3.
Executive function (EF) refers to fundamental capacities that underlie more complex cognition and have ecological relevance across the individual's lifespan. However, emerging executive functions have rarely been studied in young preterm children (age 3) whose critical final stages of fetal development are interrupted by their early birth. We administered four novel touch-screen computerized measures of working memory and inhibition to 369 participants born between 2004 and 2006 (52 Extremely Low Birth Weight [ELBW]; 196 late preterm; 121 term-born). ELBW participants performed worse than term-born participants on simple and complex working memory and inhibition tasks and had the highest percentage of incomplete performance on a continuous performance test. The latter finding indicates developmental immaturity, consistent with the ELBW group's status as the most at-risk preterm group. Additionally, late-preterm participants performed worse than term-born participants on measures of complex working memory but did not differ from them on response inhibition measures. These results are consistent with a recent literature that identifies often subtle but detectable neurocognitive deficits in late-preterm children. Our results support the development and standardization of computerized touch-screen measures to assess EF subcomponent abilities during the formative preschool period. Such measures may be useful for monitoring the developmental trajectory of critical executive function abilities in preterm children, and their use is necessary for timely recognition of deficits and application of appropriate interventional strategies.

4.
Regional cerebral blood flow (rCBF) may be measured with inhalation techniques that use end-expired values of radioactivity to estimate the isotope concentration in arterial blood. These end-expired data are used as an input function in a mathematical equation to derive rCBF. End-expired air is normally assumed to be in equilibrium with the arterial blood at the alveolar surface of the lung during regular (passive) breathing; this assumption may not be valid during continuous phonation. We therefore analyzed breathing (inhalation/exhalation) patterns and end-expired radioactivity (133Xe) during (1) speaking, (2) singing, and (3) humming of the national anthem, and also during (4) passive breathing. Statistically significant differences in breathing patterns were measured between a group of nonmusicians and two groups of musicians (singers) during the phonation tasks: the nonmusicians breathed more often (and more rapidly) and exhibited less variability in their breathing patterns than did the musicians. Notwithstanding these differences, the shapes of smoothed functions derived from the end-expired values were not influenced appreciably during phonation (except possibly during talking). The latter finding suggests that estimates of rCBF derived with these data should not be seriously confounded by continuous phonation.

7.
Although previous studies of the foot-in-the-door and the door-in-the-face techniques of interpersonal influence have established the effectiveness of these sequential request strategies, communication researchers have not discovered an adequate conceptual framework for explaining their compliance-enhancing properties. The present study tests the perceptual contrast explanation for sequential request efficacy. Compared with nonsequenced critical requests (i.e., controls), substantially higher compliance with various types of requests was obtained through the use of the foot-in-the-door and the door-in-the-face techniques, but measures of underlying cognitions failed to reveal significant anchoring effects as would be predicted by a perceptual contrast model. Limitations are discussed and suggestions for future research are offered.

8.
A model for multiple-choice exams is developed from a signal-detection perspective. A correct alternative in a multiple-choice exam can be viewed as being a signal embedded in noise (incorrect alternatives). Examinees are assumed to have perceptions of the plausibility of each alternative, and the decision process is to choose the most plausible alternative. It is also assumed that each examinee either knows or does not know each item. These assumptions together lead to a signal detection choice model for multiple-choice exams. The model can be viewed, statistically, as a mixture extension, with random mixing, of the traditional choice model, or similarly, as a grade-of-membership extension. A version of the model with extreme value distributions is developed, in which case the model simplifies to a mixture multinomial logit model with random mixing. The approach is shown to offer measures of item discrimination and difficulty, along with information about the relative plausibility of each of the alternatives. The model, parameters, and measures derived from the parameters are compared to those obtained with several commonly used item response theory models. An application of the model to an educational data set is presented.
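
The core of the model described in this abstract is a two-component mixture: an examinee who knows the item selects the correct alternative, while one who does not chooses among the alternatives according to their perceived plausibility via a multinomial logit. The sketch below illustrates that mixture with invented plausibility values and knowing probability; it is not the authors' full specification with examinee-level random mixing.

```python
# Illustrative "know it / don't know it" mixture for a multiple-choice item:
# the knowing branch always picks the key, the guessing branch is a
# multinomial logit over perceived plausibilities. All values are invented.
import numpy as np

def choice_probs(plausibility, correct_idx, p_know):
    """Marginal probability of choosing each alternative."""
    logits = np.asarray(plausibility, dtype=float)
    guess = np.exp(logits - logits.max())
    guess /= guess.sum()                  # "don't know" branch: softmax choice
    know = np.zeros_like(guess)
    know[correct_idx] = 1.0               # "know" branch: always the key
    return p_know * know + (1 - p_know) * guess

# Four alternatives; index 0 is the keyed (correct) alternative.
print(choice_probs([2.0, 1.0, 0.5, 0.0], correct_idx=0, p_know=0.6))
```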

9.
Processing capacity--defined as the relative ability to perform mental work in a unit of time--is a critical construct in cognitive psychology and is central to theories of visual attention. The unambiguous use of the construct, experimentally and theoretically, has been hindered by both conceptual confusions and the use of measures that are at best only coarsely mapped to the construct. However, more than 25 years ago, J. T. Townsend and F. G. Ashby (1978) suggested that the hazard function on the response time (RT) distribution offered a number of conceptual advantages as a measure of capacity. The present study suggests that a set of statistical techniques, well-known outside the cognitive and perceptual literatures, offers the ability to perform hypothesis tests on RT-distribution hazard functions. These techniques are introduced, and their use is illustrated in application to data from the contingent attentional capture paradigm.
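
Since the hazard function is the central quantity here, a crude life-table style estimate may help make it concrete: within each time bin, the hazard is the proportion of trials that finish in that bin among trials still unfinished at its start. The sketch below uses simulated RTs and an arbitrary 50-ms bin width; the hypothesis-testing techniques the article introduces are considerably more refined.

```python
# Discrete (life-table style) hazard estimate for a response-time distribution:
# per 50-ms bin, trials finishing in the bin divided by trials still "at risk"
# at its start. The gamma-distributed RTs and the bin width are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
rts = rng.gamma(shape=5.0, scale=80.0, size=2000)      # simulated RTs in ms

edges = np.arange(0, rts.max() + 50, 50)
counts, _ = np.histogram(rts, edges)
at_risk = len(rts) - np.concatenate(([0], np.cumsum(counts)[:-1]))
hazard = counts / at_risk

for t0, h in zip(edges[:-1], hazard):
    print(f"{t0:5.0f}-{t0 + 50:5.0f} ms   hazard = {h:.3f}")
```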

10.
A key strength of latent curve analysis (LCA) is the ability to model individual variability in rates of change as a function of 1 or more explanatory variables. The measurement of time plays a critical role because the explanatory variables multiplicatively interact with time in the prediction of the repeated measures. However, this interaction is not typically capitalized on in LCA because the measure of time is rather subtly incorporated via the factor loading matrix. The authors' goal is to demonstrate both analytically and empirically that classic techniques for probing interactions in multiple regression can be generalized to LCA. A worked example is presented, and the use of these techniques is recommended whenever estimating conditional LCAs in practice.
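
The regression analogy the authors exploit can be made concrete: in a conditional latent curve model, the model-implied slope of the repeated measure is itself a linear function of the covariate, so it can be evaluated at chosen covariate values (for example, the mean and plus or minus one SD), exactly as when probing a moderated regression. All coefficients in the sketch below are invented for illustration.

```python
# Probing the covariate-by-time interaction implied by a conditional latent
# curve model: evaluate the model-implied simple slope and trajectory at
# selected covariate values. Coefficient values are invented.
import numpy as np

g00, g01 = 10.0, 1.5      # intercept factor:  alpha_i = g00 + g01 * x
g10, g11 = 2.0, 0.8       # slope factor:      beta_i  = g10 + g11 * x

def implied_trajectory(x, time):
    return (g00 + g01 * x) + (g10 + g11 * x) * time

times = np.arange(4)                       # four measurement occasions
for label, x in [("-1 SD", -1.0), ("mean", 0.0), ("+1 SD", 1.0)]:
    print(f"{label:6s} simple slope = {g10 + g11 * x:.2f}  "
          f"trajectory = {implied_trajectory(x, times)}")
```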

11.
12.
The continuous occurrence of particularly severe epileptic attacks represents a great stress for the patients. For this reason, we have conducted anticonvulsant blood-level studies particularly in these patients, this paper being based on 146 first determinations. Among the patients, most of whom are adults and are treated with combinations of drugs, only 24 per cent reached the assumed normal phenytoin range while 69 per cent remained below it. With the administration of phenobarbital, however, 50 per cent reached the therapeutic range and only 14 per cent reached the lower limiting value. It is only in the last-mentioned group that we assumed primarily an insufficient intake of the drugs, while with DPH the problems of determining the plasma water range had to be discussed. In practice, blood-level determinations have their special value for detecting the intake of a wrong dosage of the drugs as well as bland anticonvulsant intoxications.

13.
It is generally assumed that the latent trait is normally distributed in the population when estimating logistic item response theory (IRT) model parameters. This assumption requires that the latent trait be fully continuous and the population homogeneous (i.e., not a mixture). When this normality assumption is violated, models are misspecified, and item and person parameter estimates are inaccurate. When normality cannot be assumed, it might be appropriate to consider alternative modeling approaches: (a) a zero-inflated mixture, (b) a log-logistic, (c) a Ramsay curve, or (d) a heteroskedastic-skew model. The first 2 models were developed to address modeling problems associated with so-called quasi-continuous or unipolar constructs, which apply only to a subset of the population, or are meaningful at one end of the continuum only. The second 2 models were developed to address non-normal latent trait distributions and violations of homogeneity of error variance, respectively. To introduce these alternative IRT models and illustrate their strengths and weaknesses, we performed a real-data application, comparing results to those from a graded response model. We review both statistical and theoretical challenges in applying these models and choosing among them. Future applications of these and other alternative models (e.g., unfolding, diffusion) are needed to advance understanding about model choice in particular situations.
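
As a point of reference for the alternatives listed above, the graded response model used as the comparison has a simple form: each "at or above category k" boundary curve is a two-parameter logistic, and category probabilities are differences between adjacent boundaries. The sketch below uses arbitrary discrimination and threshold values.

```python
# Graded response model category probabilities for one polytomous item:
# boundary curves P(X >= k) are 2PL logistics; category probabilities are
# differences of adjacent boundaries. Parameter values are arbitrary.
import numpy as np

def grm_category_probs(theta, a, b):
    """theta: latent trait; a: discrimination; b: increasing thresholds."""
    b = np.asarray(b, dtype=float)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))      # P(X >= k), k = 1..K-1
    bounds = np.concatenate(([1.0], p_star, [0.0]))      # add P(X >= 0) and P(X >= K)
    return bounds[:-1] - bounds[1:]                      # P(X = k), k = 0..K-1

probs = grm_category_probs(theta=0.5, a=1.7, b=[-1.0, 0.0, 1.2])
print(probs, probs.sum())                                # probabilities sum to 1
```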

14.
Past studies of accommodation fatigue have yielded inconsistent results, partly because they have not used direct measures of accommodation, and partly because they may have been based on a misleading conception of the nature of accommodation. The dual-innervation theory of accommodation suggests that the resting position of accommodation may be neuromuscular rather than just muscular, and that it lies not at optical infinity, as assumed by older conceptions, but at some intermediate position (dark focus). Among the predictions that may be deduced from this theory is that long-term visual work not requiring active accommodation will not induce fatigue. The present study involved continuous measurements of dark focus for 10 young adults over a 3-h period, using the laser optometer with two psychophysical procedures (bracketing and staircase). Consistent with the prediction, no changes in dark focus were found, in spite of the demanding visual task. Furthermore, it was found that both psychophysical methods yielded essentially identical results. The practical and theoretical implications of these results are discussed, and recommendations are given regarding situations in which each of the psychophysical methods is likely to be most useful.
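
To illustrate the second of the two psychophysical procedures mentioned above, the sketch below simulates a generic 1-up/1-down staircase converging on a hypothetical dark-focus value. The simulated observer, step size, and trial count are invented, and the actual laser-optometer task involves judging the direction of speckle motion rather than a simple above/below judgment.

```python
# Generic 1-up/1-down staircase converging on a hypothetical dark focus.
# Observer model, step size, and trial count are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_dark_focus = 1.5            # dioptres (hypothetical)
level, step = 3.0, 0.25          # starting level and step size (D)
levels = []

for trial in range(40):
    levels.append(level)
    # Simulated judgment: does the stimulus appear beyond the observer's
    # dark focus? (noisy decision)
    above = level + rng.normal(0, 0.2) > true_dark_focus
    level += -step if above else step          # 1-up/1-down rule

# Estimate from the later trials, after the staircase has converged.
print("estimated dark focus =", round(float(np.mean(levels[-20:])), 2), "D")
```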

15.
The authors review several key areas of early cognitive development in which an abrupt shift in ability at the end of the second year of life has been traditionally assumed. These areas include deferred imitation, self-recognition, language, and categorization. Contrary to much conventional theorizing, the evidence shows robust continuities in all domains of early cognitive development. Where there is evidence of a reorganization of behavior that makes a new level of performance possible, dynamic-systems analyses indicate that even these may be driven by underlying processes that are continuous. Although there remain significant definitional and methodological issues to be resolved, the outcome of this review augurs well for newer models in which cognitive development is viewed as a continuous, dynamic process.

16.
The test-taking behaviour of some examinees may be so unusual that their test scores cannot be regarded as appropriate measures of their ability. Appropriateness measurement is a model-based approach to the problem of identifying these test scores. The intuitions and basic theory supporting appropriateness measurement are presented together with a critical review of earlier work and a series of interrelated experiments. We conclude that appropriateness measurement techniques are robust to errors in parameter estimation and robust to the presence of unidentified aberrant examinees in the test norming sample. In addition, the frequently criticized ‘three-parameter logistic’ latent trait model was found to be adequate for the detection of spuriously low scores in actual test data.
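
A concrete example of an appropriateness index in the spirit of this line of work is the standardized log-likelihood statistic l_z computed under a three-parameter logistic model: strongly negative values flag response patterns that are improbable given the estimated ability. The item parameters, ability value, and response pattern in the sketch below are invented, and l_z is offered only as one common person-fit index, not necessarily the exact statistic used in the article.

```python
# Standardized log-likelihood person-fit index l_z under a 3PL model.
# Item parameters, theta, and the response pattern are invented; large
# negative values flag potentially inappropriate (aberrant) test scores.
import numpy as np

def p_3pl(theta, a, b, c):
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def lz(x, theta, a, b, c):
    p = p_3pl(theta, a, b, c)
    l0 = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))          # observed log-likelihood
    e_l0 = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))        # its expectation
    v_l0 = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)         # and its variance
    return (l0 - e_l0) / np.sqrt(v_l0)

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
c = np.full(5, 0.2)
x = np.array([1, 1, 0, 1, 0])                # observed 0/1 responses
print("l_z =", round(float(lz(x, theta=0.3, a=a, b=b, c=c)), 2))
```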

17.
18.
One of the most effective mnemonic techniques is the well-known method of loci. Learning and retention, especially of sequentially ordered information, are facilitated by this technique, which involves mentally combining salient loci on a well-known path with the material to be learned. There are several variants of this technique that differ in the kind of path that is suggested to the user, and it is implicitly assumed that these variants are comparable in effectiveness. The experiments reported in this study were designed to test this assumption. The data of two experiments show that participants who are instructed to generate and apply loci on a route to their work recall significantly more items in a memory test than participants who are instructed to generate and apply loci on a route in their house. These results have practical implications for the instruction and application of the method of loci.

19.
Applied behavior analysis is based on an investigation of variability due to interrelationships among antecedents, behavior, and consequences. This permits testable hypotheses about the causes of behavior and allows the course of treatment to be evaluated empirically. Such information provides corrective feedback for making data-based clinical decisions. This paper considers how a different approach to the analysis of variability, based on the writings of Walter Shewhart and W. Edwards Deming in the area of industrial quality control, helps to achieve similar objectives. Statistical process control (SPC) was developed to implement a process of continual product improvement while achieving compliance with production standards and other requirements for promoting customer satisfaction. SPC involves the use of simple statistical tools, such as histograms and control charts, as well as problem-solving techniques, such as flow charts, cause-and-effect diagrams, and Pareto charts, to implement Deming's management philosophy. These data-analytic procedures can be incorporated into a human service organization to help it achieve its stated objectives in a manner that leads to continuous improvement in the functioning of the clients who are its customers. Examples are provided to illustrate how SPC procedures can be used to analyze behavioral data. Issues related to the application of these tools for making data-based clinical decisions and for creating an organizational climate that promotes their routine use in applied settings are also considered.
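
The control chart is the most basic of the SPC tools mentioned above. The sketch below computes limits for an individuals (X-mR) chart, where the limits are the series mean plus or minus 2.66 times the average moving range; the behavioral data series is made up for illustration.

```python
# Individuals (X-mR) Shewhart control chart limits: mean +/- 2.66 * average
# moving range (2.66 = 3 / d2 for subgroups of size 2). Data are invented,
# e.g., daily counts of a target behavior.
import numpy as np

data = np.array([7, 9, 6, 8, 7, 10, 9, 8, 6, 7, 15, 8, 7, 9, 8], dtype=float)

center = data.mean()
mr_bar = np.abs(np.diff(data)).mean()          # average moving range
ucl = center + 2.66 * mr_bar                   # upper control limit
lcl = max(center - 2.66 * mr_bar, 0.0)         # lower limit, floored at zero

print(f"center = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
print("out-of-control sessions:", np.where((data > ucl) | (data < lcl))[0])
```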

20.
Sustained attention is critical for tasks where perceptual information must be continuously processed, like reading or driving; however, the cognitive processes underlying sustained attention remain incompletely characterized. In the experiments that follow, we explore the relationship between sustaining attention and the contents and maintenance of task-relevant features in an attentional template. Specifically, we administered the gradual onset continuous performance task (gradCPT), a sensitive measure of sustained attention, to a large web-based sample (N>20,000) and a smaller laboratory sample for validation and extension. The gradCPT requires participants to respond to most stimuli (city scenes – 90 %) and withhold responses to rare target images (mountain scenes – 10 %). By using stimulus similarity to probe the representational content of task-relevant features—assuming either exemplar- or category-based feature matching—we predicted that RTs for city stimuli that were more "mountain-like" would be slower and that "city-like" mountain stimuli would elicit more erroneous presses. We found that exemplar-based target-nontarget (T-N) similarity predicted both RTs and erroneous button presses, suggesting a stimulus-specific feature matching process was adopted. Importantly, individual differences in the degree of sensitivity to these similarity measures correlated with conventional measures of attentional ability on the gradCPT as well as another CPT that is perceptually less demanding. In other words, individuals with greater sustained attention ability (assessed by two tasks) were more likely to be influenced by stimulus similarity on the gradCPT. These results suggest that sustained attention facilitates the construction and maintenance of an attentional template that is optimal for a given task.
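
The item-level logic of the analysis above can be sketched as a simple correlation between a per-stimulus similarity score and behavior: mean RT for the frequent (city) stimuli and commission-error rate for the rare (mountain) targets. Everything in the sketch below is simulated, with the predicted effects built into the fake data, so it only illustrates the shape of the analysis, not the study's results.

```python
# Relating per-stimulus target-nontarget similarity to behavior: mean RT for
# frequent "go" stimuli, commission-error rate for rare "no-go" targets.
# All values are simulated; the predicted effects are built in by construction.
import numpy as np

rng = np.random.default_rng(7)
n_city, n_mtn = 90, 10

city_sim = rng.uniform(0, 1, n_city)     # similarity of each city scene to the target class
mtn_sim = rng.uniform(0, 1, n_mtn)       # similarity of each mountain scene to the target class

city_rt = 450 + 80 * city_sim + rng.normal(0, 20, n_city)          # slower when mountain-like
mtn_err = rng.binomial(40, 0.1 + 0.3 * (1 - mtn_sim)) / 40         # more presses when city-like

print("similarity vs mean RT:    r =", round(float(np.corrcoef(city_sim, city_rt)[0, 1]), 2))
print("similarity vs error rate: r =", round(float(np.corrcoef(mtn_sim, mtn_err)[0, 1]), 2))
```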
