Similar documents
20 similar documents found.
1.
This paper discusses the item selection problem when the item responses follow a linear multiple factor model. Because of this restrictive assumption, not too unrealistic in situations such as mental testing, it is possible to select optimal sets of items without going through all possible combinations. A method proposed by Elfving to accomplish this is analyzed and then demonstrated through the use of two illustrations. The common and often used procedure of observing the magnitude of the correlation coefficient as an index in item selection is shown to have some merit in the single-factor case. (Work performed under contract AF 41(657)-244 with the School of Aviation Medicine, Randolph AFB, Texas.)

2.
In ‘Knowledge, Certainty and Probability’, Dr. Heidelberger claims to have shown ‘that it is a mistake to assimilate probability and rational belief to knowledge’. The conclusion may be true but his argument is faulty.

3.
Sinuosity is a measure of how much a travelled pathway deviates from a straight line. In this paper, sinuosity is applied to the measurement of mood. The Affect Grid is a mood scale that requires participants to place a mark on a 9 × 9 grid to indicate their current mood. The grid has two dimensions: pleasure-displeasure (horizontal) and arousal-sleepiness (vertical). In studies where repeated measurements are required, some participants may exaggerate their mood shifts due to faulty interpretation of the scale or a feeling of social obligation to the experimenter. A new equation is proposed, based on the sinuosity measure in hydrology, a measure of the meandering of rivers. The equation takes into account an individual's presumed tendency to exaggerate and meander to correct the score and reduce outliers. The usefulness of the equation is demonstrated by applying it to Affect Grid data from another study.
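The hydrological sinuosity index the proposed equation builds on is simply the travelled path length divided by the straight-line distance between the endpoints. The sketch below computes that base index over a sequence of Affect Grid coordinates; it is a minimal illustration only, and does not reproduce the paper's full correction equation.

```python
import math

def sinuosity(points):
    """Sinuosity of a path: total travelled length divided by the
    straight-line distance between its endpoints (the hydrology
    definition).  `points` is a sequence of (x, y) coordinates,
    e.g. repeated Affect Grid ratings on the 9 x 9 grid."""
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    if chord == 0:  # path returns to its start; index is undefined
        return float("inf")
    return path / chord

# A perfectly straight mood trajectory has sinuosity 1.0;
# meandering trajectories score higher.
straight = sinuosity([(1, 1), (3, 3), (5, 5)])
bent = sinuosity([(1, 1), (5, 1), (5, 5)])
```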

4.
In this paper it is shown that under the random effects generalized partial credit model for the measurement of a single latent variable by a set of polytomously scored items, the joint marginal probability distribution of the item scores has a closed-form expression in terms of item category location parameters, parameters that characterize the distribution of the latent variable in the subpopulation of examinees with a zero score on all items, and item-scaling parameters. Due to this closed-form expression, all parameters of the random effects generalized partial credit model can be estimated using marginal maximum likelihood estimation without assuming a particular distribution of the latent variable in the population of examinees and without using numerical integration. Also due to this closed-form expression, new special cases of the random effects generalized partial credit model can be identified. In addition to these new special cases, a slightly more general model than the random effects generalized partial credit model is presented. This slightly more general model is called the extended generalized partial credit model. Attention is paid to maximum likelihood estimation of the parameters of the extended generalized partial credit model and to assessing the goodness of fit of the model using generalized likelihood ratio tests. Attention is also paid to person parameter estimation under the random effects generalized partial credit model. It is shown that expected a posteriori estimates can be obtained for all possible score patterns. A simulation study is carried out to show the usefulness of the proposed models compared to the standard models that assume normality of the latent variable in the population of examinees. In an empirical example, some of the procedures proposed are demonstrated.
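For context, the category response probabilities of the generalized partial credit model have a standard closed form: the probability of score k is proportional to the exponential of the cumulative sum of a(θ − b_j) over the first k category locations. The sketch below assumes one common parameterization (conventions for the scaling parameter a and the location parameters b vary across the literature) and is illustrative rather than the paper's estimation machinery.

```python
import math

def gpcm_probs(theta, a, b):
    """Category response probabilities for one item under the
    generalized partial credit model (a sketch; parameterization
    conventions differ between authors).
    theta : latent trait value
    a     : item-scaling (discrimination) parameter
    b     : category location parameters for categories 1..m
    Returns probabilities for scores 0..m."""
    exps = []
    s = 0.0                 # score 0 contributes 0 to the exponent
    exps.append(math.exp(s))
    for bk in b:
        s += a * (theta - bk)
        exps.append(math.exp(s))
    total = sum(exps)
    return [e / total for e in exps]

# Four-category item: probabilities over scores 0, 1, 2, 3.
probs = gpcm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0])
```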

5.
Eriksen and Steffy (1964) were critical of short-term visual storage effects (STVS) because they were unable to find any evidence for them. In their first experiment they found instead great interference in performance over the same ISIs in which STVS is usually found. Their second experiment eliminated the interference by avoiding a bright second flash, but it still produced no evidence for STVS. Keele and Chase (1967) demonstrated that Eriksen and Steffy's second condition failed to find STVS because the memory load was too small, and perhaps because the luminance was too low. However, the design of Eriksen and Steffy's second experiment is not the one typically used to find STVS. Eriksen and Steffy's first experiment was replicated here, and a second condition was added in which each S was also tested in a light-adapted version. Interference was found in the former, as Eriksen and Steffy also found, but not in the latter. Little STVS was found in either of these conditions, presumably for reasons similar to those demonstrated by Keele and Chase.

6.
Paller KA, Lucas HD, Voss JL. Trends in Cognitive Sciences, 2012, 16(6): 313-315; discussion 315-316.
Familiarity is sometimes associated with midfrontal old/new (FN400) signals, but investigators assume too much by inferring familiarity whenever they identify these signals. We describe how Rosburg and colleagues (2011) made this assumption, yielding potentially faulty conclusions about the recognition heuristic. We provide an alternative interpretation emphasizing implicit processing that can underlie decision-making.

7.
Subjunctivitis is the doctrine that what is distinctive about knowledge is essentially modal in character, and thus is captured by certain subjunctive conditionals. One principal formulation of subjunctivism invokes a 'sensitivity condition' (Nozick, DeRose), the other invokes a 'safety condition' (Sosa). It is shown in detail how defects in the sensitivity condition generate unwanted results, and that the virtues of that condition are merely apparent. The safety condition is untenable also, because it is too easily satisfied. A powerful motivation for adopting subjunctivism would be that it provides a solution to the problem of misleading evidence, but in fact, it does not.

8.
This paper discusses the epistemological and methodological bases of a scientific theory of meaning and proposes a detailed version of a formal theory of argumentation based on Anscombre and Ducrot's conception. Argumentation is shown to be a concept which is not exclusively pragmatic, as is usually believed, but has an important semantic body. The bridge between the semantic and pragmatic aspects of argumentation consists in a set of gradual inference rules, called topoi, on which the argumentative movement is based. The content of each topos is determined at the pragmatic level, while the constraints on the forms of the topoi attached to a sentence are determined at the semantic level. Applications and possible applications to artificial intelligence and to cognitive sciences are discussed. In particular, the gradual models used to account for argumentation are shown to be extremely promising for knowledge management, a discipline which includes knowledge acquisition, knowledge representation, transmission of knowledge (communication, interfaces, etc.), and knowledge production (decision help, reasoning, etc.). A first formal model is presented and discussed: it is shown in detail how it accounts for most of the argumentative features of sentences containing but, little, and a little, and how it can be extended to describe sentences containing other argumentative connectives. However, this model is shown to be too simple and to violate the compositionality principle, which is shown, in the first section, to be an important methodological principle for any scientific theory. After a detailed analysis of the possible reasons for this violation, an improved model is proposed and its adequacy is discussed.

9.
A multinormal partial credit model for factor analysis of polytomously scored items with ordered response categories is derived using an extension of the Dutch Identity (Holland in Psychometrika 55:5–18, 1990). In the model, latent variables are assumed to have a multivariate normal distribution conditional on unweighted sums of item scores, which are sufficient statistics. Attention is paid to maximum likelihood estimation of item parameters, multivariate moments of latent variables, and person parameters. It is shown that the maximum likelihood estimates can be found without the use of numerical integration techniques. More general models are discussed which can be used for testing the model, and it is shown how models with different numbers of latent variables can be tested against each other. In addition, multi-group extensions are proposed, which can be used for testing both measurement invariance and latent population differences. Models and procedures discussed are demonstrated in an empirical data example.

10.
The literature on insight problems—problems that supposedly can only be solved by rejection of an initial faulty problem representation and sudden comprehension of another, nonobvious representation (restructuring)—suggests that the size of initial representations affects the very process of problem solving. Large initial representations impose systematic, analytical search, whereas only small representations promote intuitive, associative processes assumed by some theorists to underpin insight. In a group of 353 young healthy participants, 6 previously validated insight problems were applied in either a small or large initial representation variant. Results demonstrated no reliable difference in performance between the problem variants with regard to (a) solution accuracy, (b) self-reported insight accompanying solutions, (c) effects of fatigue, (d) correlations with another 6 small representation-size problems, and (e) correlations with working memory capacity (which were notable). This outcome suggests that the size of the initial faulty representation plays no role in the insight problem solving process, supporting accounts that assume insight problem solving strongly resembles systematic, analytical problem solving.

11.
Several philosophers have maintained in recent years that the endurance/perdurance debate is merely verbal: these prima facie distinct theories of objects’ persistence are in fact metaphysically equivalent, they claim. The present paper challenges this view. Three proposed translation schemes (those set forth by Miller in Erkenntnis 62:91–117, 2005, McCall and Lowe in Noûs 40:570–578, 2006, and Hirsch in Metametaphysics—new essays on the foundations of ontology. Oxford University Press, Oxford, 2009) are examined; all are shown to be faulty. In the process, constructive reasons for regarding the debate as a substantive one are provided. It is also suggested that the theories may have differing practical implications.

12.
13.
The insightful problem-solving process has been proposed to involve three main phases: an initial representation phase, in which the solver inappropriately represents the problem; an initial search through the faulty problem space that may lead to impasse; and a postimpasse restructuring phase. Some theories propose that the restructuring phase involves controlled search processes, whereas other theories propose that restructuring is achieved through the automatic redistribution of activation in long-term memory. In this study, we used correlations between working memory (WM) span measures and problem-solving success to test the predictions of these different theories. One group of participants received a set of insight problems that allowed for a large initial faulty search space, whereas another group received a matched set that constrained the initial faulty search space in order to isolate the restructuring phase of the insightful process. The results suggest that increased ability to control attention (as measured by WM span tasks) predicts an individual’s ability to successfully solve problems that involve both the initial search phase and the restructuring phase. However, individual differences in ability to control attention do not predict success on problems that isolate the restructuring phase. These results are interpreted as supporting an automatic-redistribution-of-activation account of restructuring.

14.
Decisions between multiple alternatives typically conform to Hick’s Law: Mean response time increases log-linearly with the number of choice alternatives. We recently demonstrated context effects in Hick’s Law, showing that patterns of response latency and choice accuracy were different for easy versus difficult blocks. The context effect explained previously observed discrepancies in error rate data and provided a new challenge for theoretical accounts of multialternative choice. In the present article, we propose a novel approach to modeling context effects that can be applied to any account that models the speed–accuracy trade-off. The core element of the approach is “optimality” in the way an experimental participant might define it: minimizing the total time spent in the experiment, without making too many errors. We show how this approach can be included in an existing Bayesian model of choice and highlight its ability to fit previous data as well as to predict novel empirical context effects. The model is shown to provide better quantitative fits than a more flexible heuristic account.
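Hick's Law itself is a one-line formula: mean response time is a linear function of log2(N + 1), where N is the number of choice alternatives and the "+ 1" treats the decision whether to respond at all as one extra alternative. A toy sketch, with illustrative intercept and slope values rather than estimates from the authors' data:

```python
import math

def hicks_law_rt(n_alternatives, a=0.2, b=0.15):
    """Mean response time under Hick's Law: RT grows log-linearly
    with the number of equally likely alternatives N.  The intercept
    `a` and slope `b` (in seconds) are illustrative values only."""
    return a + b * math.log2(n_alternatives + 1)

# RT rises by a constant amount each time N + 1 doubles.
rts = [hicks_law_rt(n) for n in (1, 3, 7, 15)]
```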

15.
In this research, we investigated the process of preparing strategies for performing choice-reaction tasks. Before each choice-reaction trial, subjects were shown a cue that indicated features of the stimulus-response mapping to be used on the upcoming trial. Subjects used this cue to specify their strategy for responding to the stimulus. The time needed for specifying the strategy was measured by allowing subjects to control the cue presentation and surreptitiously recording how long they spent looking at the cue. The experiments demonstrated that the time to prepare a strategy was a function of the number and nature of the strategy features that had to be specified; simple uncertainty about the possible strategies had little direct effect. The results disconfirmed a serial model in which the time to prepare a strategy is the sum of the times to specify the individual strategy features. A mixed serial-parallel model was proposed as an alternative.

16.
Language deficits and the theory of syntax
A new structural account of agrammatism is proposed, which analyzes the deficit in terms of one current theory of syntax. First, the motivation for accounts of this kind is given. Then, a variety of experimental findings from sentence comprehension in agrammatism are examined and accounted for in a unified way. It is shown that a minimal change in the syntactic model (achieved by imposing a special condition on a construct called trace) results in a model which accounts for all the data at hand. A number of possible objections to this proposal are then examined, and reasons are given to dismiss these objections. Also, it is shown that this proposal is preferable to other structural accounts which have been recently proposed. Finally, the empirical consequences of this account are discussed, with a special emphasis on the implications for models of language processing.

17.
Shame, shame
The word shame, as discussed in the literature, is too general and vague. It should thus be restricted to problems caused by (1) faulty toilet training; (2) the consistent use of humiliation as a form of discipline; and (3) public humiliation. Therapists need to be active both in identifying shame and in intervening therapeutically, since patients tend to hide it. Group therapy along with individual therapy is especially helpful in reversing the effects of public humiliation.

18.
In this essay, I attempt to assess Henk de Regt and Dennis Dieks's recent pragmatic and contextual account of scientific understanding on the basis of an important historical case study: understanding in Newton's theory of universal gravitation and Huygens' reception of universal gravitation. It will be shown that de Regt and Dieks's Criterion for the Intelligibility of a Theory (CIT), which stipulates that the appropriate combination of scientists' skills and intelligibility-enhancing theoretical virtues is a condition for scientific understanding, is too strong. On the basis of this case study, it will be shown that scientists can understand each other's positions qualitatively and quantitatively, despite their endorsement of different worldviews and despite their differing convictions as to what counts as a proper explanation.

19.
The multinomial (Dirichlet) model, derived from de Finetti's concept of exchangeability, is proposed as a general Bayesian framework to test axioms on data, in particular, deterministic axioms characterizing theories of choice or measurement. For testing, the proposed framework does not require a deterministic axiom to be cast in a probabilistic form (e.g., casting deterministic transitivity as weak stochastic transitivity). The generality of this framework is demonstrated through empirical tests of 16 different axioms, including transitivity, consequence monotonicity, segregation, additivity of joint receipt, stochastic dominance, coalescing, restricted branch independence, double cancellation, triple cancellation, and the Thomsen condition. The model generalizes many previously proposed methods of axiom testing under measurement error, is analytically tractable, and provides a Bayesian framework for the random relation approach to probabilistic measurement (J. Math. Psychol. 40 (1996) 219). A hierarchical and nonparametric generalization of the model is discussed.
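As a toy illustration of the general idea (not the paper's actual procedure), one can place a Dirichlet posterior over the probabilities of the possible response patterns and compute how much posterior mass falls on patterns consistent with a deterministic axiom such as transitivity. The pattern encoding, prior, and counts below are invented for illustration.

```python
import random

def dirichlet_sample(alphas, rng):
    """One Dirichlet draw via normalized Gamma variates."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(g)
    return [x / total for x in g]

# Response patterns over three paired comparisons (a vs b, b vs c,
# a vs c), encoded as bit triples with 1 = "first alternative chosen".
# Exactly two of the eight patterns are intransitive cycles.
PATTERNS = [((i >> 2) & 1, (i >> 1) & 1, i & 1) for i in range(8)]
INTRANSITIVE = {(1, 1, 0), (0, 0, 1)}

def posterior_mass_transitive(counts, prior=1.0, draws=2000, seed=1):
    """Posterior mean probability mass on transitive patterns, under
    a symmetric Dirichlet(prior) over the eight pattern probabilities.
    `counts` are observed frequencies of each pattern (toy data here)."""
    rng = random.Random(seed)
    alphas = [c + prior for c in counts]
    acc = 0.0
    for _ in range(draws):
        p = dirichlet_sample(alphas, rng)
        acc += sum(pi for pat, pi in zip(PATTERNS, p)
                   if pat not in INTRANSITIVE)
    return acc / draws

# Mostly transitive toy data: posterior mass on transitivity is high.
counts = [40, 1, 5, 30, 2, 25, 1, 40]
```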

20.
Based on the stability of the attachment "internal working model", researchers hold that adult attachment is continuous across time, and that this continuity can be examined through the stability of attachment styles measured at different time points. Studies employing various self-report questionnaires and semi-structured interviews show that adult attachment styles are moderately stable across the different periods of adult development. This result implies that adult attachment also leaves room for fluctuation and change, and researchers have proposed three models, based on life stress, social cognition, and individual differences, to explain change in adult attachment from different angles. Research on the stability of adult attachment has accumulated more than twenty years of findings, but there remains room for improvement in the range of populations studied, the construction of explanatory models, and the innovation of research methods.
