Similar Documents
20 similar documents found (search time: 46 ms)
1.
2.
In their recent paper, Marchant, Simons, and De Fockert (2013) claimed that the ability to average between multiple items of different sizes is limited by small samples of arbitrarily attended members of a set. This claim is based on a finding that observers are good at representing the average when an ensemble includes only two sizes distributed among all items (regular sets), but their performance gets worse when the number of sizes increases with the number of items (irregular sets). We argue that an important factor not considered by Marchant et al. (2013) is the range of size variation, which was much bigger in their irregular sets. We manipulated this factor across our experiments and found almost the same efficiency of averaging for both regular and irregular sets when the range was stabilized. Moreover, highly regular sets consisting only of small and large items (two-peak distributions) were averaged with greater error than sets with small, large, and intermediate items, suggesting a segmentation threshold determining whether all variable items are perceived as a single ensemble or as distinct subsets. Our results demonstrate that averaging can actually operate in parallel, although the visual system has difficulty when some items differ too much from others.

3.
Reference-group effects (discovered in cross-cultural settings) occur when responses to self-report items are based not on respondents’ absolute level of a construct but rather on their level relative to a salient comparison group. In this article, we examine the impact of reference-group effects on the assessment of self-reported personality and attitudes. Two studies illustrate that a reference-group effect can be induced by small changes to instruction sets, changes that mirror the instruction sets of commonly used measures of personality. Scales that specified different reference groups showed substantial reductions in criterion-related validities for academic performance, self-reported counterproductive behaviors, and self-reported health outcomes relative to reference-group-free versions of those scales.

4.
Abstract: The Q-matrix is the foundation of cognitive diagnosis; an incorrect Q-matrix affects parameter estimation and the accuracy of examinee classification, so developing a simple and effective Q-matrix estimation method helps ensure that the Q-matrix is correctly specified. In contrast to parametric Q-matrix estimation methods, this study applies the Hamming distance (HD) to Q-matrix estimation, producing a simple and effective nonparametric estimation method. Using a research paradigm that combines Monte Carlo simulation with empirical study, we examined the soundness and performance of the method. The results show that (1) the HD-based Q-matrix estimation method achieves high estimation accuracy and is unaffected by examinee sample size; (2) the method is easy to understand and computationally fast, making it a simple and effective Q-matrix estimation method; and (3) the new method recovers the Q-matrix of Tatsuoka's (1990) fraction subtraction test reasonably well, indicating good potential for practical application.
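The abstract does not spell out the estimator, so the following is only a minimal sketch of the underlying idea under a DINA-like (conjunctive) response rule, with hypothetical function names: for each item, choose the attribute vector whose ideal response pattern has the smallest Hamming distance to the observed responses.

```python
import itertools
import numpy as np

def ideal_response(q, alpha):
    # Under a conjunctive (DINA-like) rule, an examinee answers correctly
    # iff they master every attribute the item requires.
    return int(np.all(alpha >= q))

def estimate_q_vector(item_responses, alphas, n_attributes):
    """Pick the q-vector that minimizes the Hamming distance between
    observed and ideal responses, over all non-zero candidate vectors."""
    best_q, best_dist = None, float("inf")
    for q in itertools.product([0, 1], repeat=n_attributes):
        if not any(q):
            continue  # an item must measure at least one attribute
        ideal = np.array([ideal_response(np.array(q), a) for a in alphas])
        dist = int(np.sum(ideal != item_responses))  # Hamming distance
        if dist < best_dist:
            best_q, best_dist = np.array(q), dist
    return best_q, best_dist
```

With noise-free responses over all mastery patterns, the true q-vector is the unique one at distance zero; with real data, the minimizer is the estimate.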

5.
Many questions in the social sciences reduce to a comparison of mean values across groups in a classical analysis of variance F test. Often the original data may come from a set of items in a questionnaire or personality inventory. When this occurs, some sort of data reduction, combining of items, or scaling procedure is first performed before the hypothesis of no difference in mean values across groups can be tested. In many cases, this problem causes undue concern to a researcher because the effect of the scoring procedure on the distribution of F is not clear. To help solve this problem, this study was undertaken to investigate whether the method used to calculate scores has any effect on the magnitude of the F ratio in an analysis of variance, for, if it were shown that no statistical difference existed, then a researcher would have some justification for choosing the procedure requiring minimal effort. On the other hand, if statistical differences were to arise because of the kind of scaling procedure employed, then a researcher would have to be more cautious in his choice. For this empirical investigation, Guttman, factor, and simple sum scores were generated using item responses from a large pool of high school seniors. No difference in scoring method was detected when the F ratios resulting from each of the three scoring methods were analyzed. This suggests that, for such analyses, a simple sum score may be as effective as scores derived by more complicated methods.

6.
7.
When answering questions from memory, respondents strategically control the precision or coarseness of their answers. This grain control process is guided by 2 countervailing aims: to be informative and to be correct. Previously, M. Goldsmith, A. Koriat, and A. Weinberg-Eliezer (2002) proposed a satisficing model in which respondents provide the most precise answer that passes a minimum-confidence report criterion. Pointing to social-pragmatic considerations, the present research shows the need to incorporate a minimum-informativeness criterion as well. Unlike its predecessor, the revised, "dual-criterion" model implies a distinction between 2 theoretical knowledge states: Under moderate-to-high levels of satisficing knowledge, a grain size can be found that jointly satisfies both criteria--confidence and informativeness. In contrast, under lower levels of nonsatisficing knowledge, the 2 criteria conflict--one cannot be satisfied without violating the other. In support of the model, respondents often violated the confidence criterion in deference to the informativeness criterion, particularly when answering under low knowledge, despite having full control over grain size. Results also suggest a key role for the "don't know" response which, when available, can be used preferentially to circumvent the criterion conflict.

8.
New formulas are developed to give lower bounds to the reliability of a test, whether or not all respondents attempt all items. The formulas apply in particular, then, to completed tests, pure speed tests, pure power tests, and any mixture of speed and power. For the case of completed tests, the formulas give the same answer as certain standard ones; for noncompleted tests the formulas give a correct answer where previous standard formulas are inappropriate. The formulas hold both in the sense of retest reliability and of parallel tests. This research was facilitated by an uncommitted grant-in-aid to the writer from the Behavioral Sciences Division of the Ford Foundation.
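The paper's own formulas are not reproduced in this abstract. As a hedged illustration of what a computable lower bound to reliability looks like, here is Cronbach's alpha, a standard lower bound for completed tests (not the formulas of this paper):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha, a classical lower bound to test reliability.
    scores: (n_respondents, n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)
```

When items are perfectly parallel (identical columns), alpha reaches its maximum of 1; noisier items pull it down.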

9.
Neural network models of memory are notorious for catastrophic interference: Old items are forgotten as new items are memorized (French, 1999; McCloskey & Cohen, 1989). While working memory (WM) in human adults shows severe capacity limitations, these capacity limitations do not reflect neural network style catastrophic interference. However, our ability to quickly apprehend the numerosity of small sets of objects (i.e., subitizing) does show catastrophic capacity limitations, and this subitizing capacity and WM might reflect a common capacity. Accordingly, computational investigations (Knops, Piazza, Sengupta, Eger & Melcher, 2014; Sengupta, Surampudi & Melcher, 2014) suggest that mutual inhibition among neurons can explain both kinds of capacity limitations as well as why our ability to estimate the numerosity of larger sets is limited according to a Weber ratio signature. Based on simulations with a saliency map-like network and mathematical proofs, we provide three results. First, mutual inhibition among neurons leads to catastrophic interference when items are presented simultaneously. The network can remember a limited number of items, but when more items are presented, the network forgets all of them. Second, if memory items are presented sequentially rather than simultaneously, the network remembers the most recent items rather than forgetting all of them. Hence, the tendency in WM tasks to sequentially attend even to simultaneously presented items might not only reflect attentional limitations, but also an adaptive strategy to avoid catastrophic interference. Third, the mean activation level in the network can be used to estimate the number of items in small sets, but it does not accurately reflect the number of items in larger sets. Rather, we suggest that the Weber ratio signature of large number discrimination emerges naturally from the interaction between the limited precision of a numeric estimation system and a multiplicative gain control mechanism.  

10.
孟祥斌, 心理科学 (Journal of Psychological Science), 2016, 39(3), 727-734
In recent years, modeling item response time data has become one of the active directions in psychological and educational measurement. To address shortcomings of the log-normal and Box-Cox normal models for response times, this paper builds a log-linear response time model based on the skew-normal distribution within van der Linden's hierarchical modeling framework, and derives a Markov chain Monte Carlo (MCMC) algorithm for estimating the model parameters. Results from both simulation studies and an empirical analysis show that, compared with the log-normal and Box-Cox normal models, the log skew-normal model fits better and offers greater flexibility and applicability.

11.
Ethics and Nanotechnology: Views of Nanotechnology Researchers
Robert McGinn, Nanoethics, 2008, 2(2), 101-131
A study was conducted of nanotechnology (NT) researchers’ views about ethics in relation to their work. By means of a purpose-built questionnaire, made available on the Internet, the study probed NT researchers’ general attitudes toward and beliefs about ethics in relation to NT, as well as their views about specific NT-related ethical issues. The questionnaire attracted 1,037 respondents from 13 U.S. university-based NT research facilities. Responses to key questionnaire items are summarized and noteworthy findings presented. For most respondents, the ethical responsibilities of NT researchers are not limited to those related to safety and integrity in the laboratory. Most believe that NT researchers also have specific ethical responsibilities to the society in which their research is done and likely to be applied. NT appears to be one of the first areas of contemporary technoscientific activity in which a long-standing belief is being seriously challenged: the belief that society is solely responsible for what happens when a researcher’s work, viewed as neutral and merely enabling, is applied in a particular social context. Survey data reveal that most respondents strongly disagree with that paradigmatic belief. Finally, an index gauging NT researcher sensitivity to ethics and ethical issues related to NT was constructed. A substantial majority of respondents exhibited medium or high levels of sensitivity to ethics in relation to NT. Although most respondents view themselves as not particularly well informed about ethics in relation to NT, a substantial majority are aware of and receptive to ethical issues related to their work, and believe that these issues merit consideration by society and study by current and future NT practitioners.

12.
According to a recent theory of dyslexia, the perceptual anchor theory, children with dyslexia show deficits in classic auditory and phonological tasks not because they have auditory or phonological impairments but because they are unable to form a ‘perceptual anchor’ in tasks that rely on a small set of repeated stimuli. The theory makes the strong prediction that rapid naming deficits should only be present in small sets of repeated items, not in large sets of unrepeated items. The present research tested this prediction by comparing rapid naming performance of a small set of repeated items with that of a large set of unrepeated items. The results were unequivocal. Deficits were found both for small and large sets of objects and numbers. The deficit was actually bigger for large sets than for small sets, which is the opposite of the prediction made by the anchor theory. In conclusion, the perceptual anchor theory does not provide a satisfactory account of some of the major hallmark effects of developmental dyslexia.

13.
Goodman contributed to the theory of scaling by including a category of intrinsically unscalable respondents in addition to the usual scale-type respondents. However, his formulation permits only error-free responses by respondents from the scale types. This paper presents new scaling models which have the properties that: (1) respondents in the scale types are subject to response errors; (2) a test of significance can be constructed to assist in deciding on the necessity for including an intrinsically unscalable class in the model; and (3) when an intrinsically unscalable class is not needed to explain the data, the model reduces to a probabilistic, rather than to a deterministic, form. Three data sets are analyzed with the new models and are used to illustrate stages of hypothesis testing.

14.
Using three different debiasing methods, this study examined the foresight bias and its reduction in children of different ages. The results showed that second and third graders exhibited the foresight bias, whereas fifth graders did not; for second graders, only the theory-based debiasing method reduced the bias, while for third graders all three methods did. The results suggest that helping children establish a correct metacognitive theory can effectively improve their metacognitive monitoring.

15.
The Foresight Bias in Primary School Students and Its Reduction
张敏, 雷开春, 张巧明, 心理科学 (Journal of Psychological Science), 2005, 28(5), 1148-1154
Using three different debiasing methods, this study examined the foresight bias and its reduction in children of different ages. The results showed that second and third graders exhibited the foresight bias, whereas fifth graders did not; for second graders, only the theory-based debiasing method reduced the bias, while for third graders all three methods did. The results suggest that helping children establish a correct metacognitive theory can effectively improve their metacognitive monitoring.

16.
To handle the "sleeper" phenomenon in testing, in which high-ability examinees answer easy items incorrectly, the four-parameter logistic (4PL) model can be used to analyze the data. This study took real data from a psychological test and an achievement test, fitted each with traditional models and with the 4PL model, and compared the fit indices and parameter estimates across models. The results show that the 4PL model improves model fit, increases the accuracy of the estimates, and effectively corrects the underestimation of high-ability examinees' abilities. The 4PL model is recommended for data analysis when necessary.
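A minimal sketch of the four-parameter logistic item response function the abstract refers to; the parameter values used below are illustrative, not taken from the study:

```python
import math

def p_4pl(theta, a, b, c, d):
    """Four-parameter logistic IRF:
    P(theta) = c + (d - c) / (1 + exp(-a * (theta - b))).
    c is the lower asymptote (guessing); the upper asymptote d < 1
    allows even very able examinees to slip on easy items."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))
```

Setting d = 1 recovers the 3PL model; a d below 1 is what keeps occasional errors by high-ability examinees from dragging their ability estimates down.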

17.
Preference data, such as Likert scale data, are often obtained in questionnaire-based surveys. Clustering respondents based on survey items is useful for discovering latent structures. However, cluster analysis of preference data may be affected by response styles, that is, a respondent's systematic response tendencies irrespective of the item content. For example, some respondents may tend to select ratings at the ends of the scale, which is called an ‘extreme response style’. A cluster of respondents with an extreme response style can be mistakenly identified as a content-based cluster. To address this problem, we propose a novel method of clustering respondents based on their indicated preferences for a set of items while correcting for response-style bias. We first introduce a new framework to detect, and correct for, response styles by generalizing the definition of response styles used in constrained dual scaling. We then simultaneously correct for response styles and perform a cluster analysis based on the corrected preference data. A simulation study shows that the proposed method yields better clustering accuracy than the existing methods do. We apply the method to empirical data from four different countries concerning social values.

18.
Defining item attributes is a key step in implementing cognitive diagnostic assessment. The prevailing approach relies on experienced domain experts to define the attributes of each item, but this process is subject to many subjective factors. Objective methods for defining or verifying item attributes can provide strategic support for the expert-based process or improve its results, and have therefore attracted researchers' attention. This study constructs a simple and efficient method for defining item attributes: a likelihood-ratio D2 statistic is used to estimate item attributes from response data, enabling joint estimation of attribute mastery patterns, item parameters, and item attribute vectors. Simulation results show that the likelihood-ratio D2 statistic can effectively identify items' attribute vectors; the method supports both online estimation of the attribute vectors of newly written items and verification of the accuracy of previously defined attribute vectors.

19.
Responses to items from an intelligence test may be fast or slow. The research issue dealt with in this paper is whether the intelligence involved in fast correct responses differs in nature from the intelligence involved in slow correct responses. There are two questions related to this issue: 1. Are the processes involved different? 2. Are the abilities involved different? An answer to these questions is provided making use of data from a Raven-like matrices test and a verbal analogies test, and the use of a psychometric branching model. The branching model is based on three latent traits: speed, fast accuracy and slow accuracy, and item parameters corresponding to each of these. The pattern of item difficulties is used to draw conclusions on the cognitive processes involved. The results are as follows: 1. The processes involved in fast and slow responses can be differentiated, as can be derived from qualitative differences in the patterns of item difficulty, and fast responses lead to a larger differentiation between items than slow responses do. 2. The abilities underlying fast and slow responses can also be differentiated, and fast responses allow for a better differentiation between the respondents.

20.
This paper assesses framing effects on decision making with internal uncertainty, i.e., partial knowledge, by focusing on examinees' behavior in multiple-choice (MC) tests with different scoring rules. In two experiments participants answered a general-knowledge MC test that consisted of 34 solvable and 6 unsolvable items. Experiment 1 studied two scoring rules involving Positive (only gains) and Negative (only losses) scores. Although answering all items was the dominating strategy for both rules, the results revealed a greater tendency to answer under the Negative scoring rule. These results are in line with the predictions derived from Prospect Theory (PT) [Econometrica 47 (1979) 263]. The second experiment studied two scoring rules, which allowed respondents to exhibit partial knowledge. Under the Inclusion-scoring rule the respondents mark all answers that could be correct, and under the Exclusion-scoring rule they exclude all answers that might be incorrect. As predicted by PT, respondents took more risks under the Inclusion rule than under the Exclusion rule. The results illustrate that the basic process that underlies choice behavior under internal uncertainty and especially the effect of framing is similar to the process of choice under external uncertainty and can be described quite accurately by PT.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号