Similar documents
Found 20 similar documents (search time: 31 ms)
1.
From the perspective of George A. Kelly's personal construct theory, the article outlines points of view concerning "advantages of symptoms". A procedure for eliciting possible advantages is presented ("the ABC-model") and compared with a more conventional psychoanalytic approach, as exemplified in a previously published case study. From a Kellyan view, people are "personal scientists", seeking, by testing their hypotheses, to predict and control events. Personal scientists may, however, get stuck with their hypotheses, possibly resulting in "symptoms", and may need a fellow scientist (e.g., a therapist) to encourage them to see and try out alternative procedures. The authors underscore that the therapist should beware of imposing their own interpretations on clients, and should rather invite them to participate in a joint effort to elaborate and understand (analyze). This is a joint process, but one principally based on the client's own personal meaning structure (construction). Since there is more than one alternative or one answer to the client's situation, the therapeutic process will probably call for frequent reconstructions.

2.
In the present study, the use of knowledge space theory (KST), jointly with formal concept analysis (FCA), is proposed for developing a formal representation of the relations between the items of a questionnaire and a set of psychodiagnostic criteria. This formal representation can be used to develop an efficient adaptive tool for psychological assessment. Rusch and Wille (1996) have shown some interesting connections between KST and FCA; these connections are applied in the construction of knowledge structures, starting from a formal context representing the relations between items and criteria. The proposed general methodology was applied, as an example, to the Maudsley Obsessional-Compulsive Questionnaire. We used a data set provided by a sample of patients with a diagnosis of obsessive-compulsive disorder to validate the obtained structures. The parameters of the basic local independence model (BLIM) were estimated for the obtained knowledge structures. The fit of each model was tested by parametric bootstrap because of the sparseness of the derived data matrix. The results are discussed in light of both their psychological and methodological implications. In particular, we propose a reinterpretation of the BLIM parameters that seems suitable for testing reliability and construct validity; furthermore, it is pointed out how the obtained structures could represent the starting point for the development of a computerized assessment tool.
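
A minimal sketch of the kind of construction described above, assuming a toy item-by-criterion incidence relation (the item labels, criteria, and the Python rendering are all invented for illustration; this is not the authors' procedure). The standard FCA derivation operator yields the extents of formal concepts, which form a closure system that can be read as candidate knowledge states:

from itertools import combinations

# Hypothetical incidence relation: questionnaire item -> diagnostic criteria it taps.
incidence = {
    "item1": {"checking"},
    "item2": {"checking", "washing"},
    "item3": {"washing"},
    "item4": {"rumination"},
}
criteria = set().union(*incidence.values())

def extent(attrs):
    """FCA derivation: the items that relate to every criterion in `attrs`."""
    return frozenset(i for i, cs in incidence.items() if attrs <= cs)

# Extents of all formal concepts: the full item set plus the extent of every criterion subset.
extents = {extent(frozenset())}
for r in range(1, len(criteria) + 1):
    for attrs in combinations(criteria, r):
        extents.add(extent(frozenset(attrs)))

# Read the resulting closure system (plus the empty set) as knowledge states.
knowledge_structure = sorted(extents | {frozenset()}, key=len)
for state in knowledge_structure:
    print(sorted(state))

In the paper itself, the mapping between contexts and structures is of course richer (and is validated with the BLIM), but the sketch shows where the states come from: they are determined by the item-criterion relations rather than posited ad hoc.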

3.
Perceptual scientists have recently enjoyed success in constructing mathematical theories for specific perceptual capacities, capacities such as stereovision, auditory localization, and color perception. Analysis of these theories suggests that they all share a common mathematical structure. If this is true, the elucidation of this structure, the study of its properties, the derivation of its consequences, and the empirical testing of its predictions are promising directions for perceptual research. We consider a candidate for the common structure, a candidate called an "observer". Observers, in essence, perform inferences; each observer has a characteristic class of perceptual premises, a characteristic class of perceptual conclusions, and its own functional relationship between these premises and conclusions. If observers indeed capture the structure common to perceptual capacities, then each capacity, regardless of its modality or manner of instantiation, can be described as some observer. In this paper we develop the definition of an observer. We first consider two examples of perceptual capacities: the measurement of visual motion, and the perception of depth from visual motion. In each case, we review a formal theory of the capacity and abstract its structural essence. From this essence we construct the definition of observer. We then exercise the definition in discussions of transduction, perceptual illusions, perceptual uncertainty, regularization theory, the cognitive penetrability of perception, and the theory neutrality of observation.
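
A toy rendering, in Python, of the three ingredients the abstract names for an observer: a class of perceptual premises, a class of perceptual conclusions, and a functional relationship between them. The names and the depth-from-motion example are illustrative only and do not reproduce the paper's formal definition:

from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass(frozen=True)
class Observer:
    premises: FrozenSet[str]         # possible perceptual premises
    conclusions: FrozenSet[str]      # possible perceptual conclusions
    interpret: Callable[[str], str]  # the observer's inference from premise to conclusion

# Illustrative "depth from motion" observer: a premise passing a rigidity test is
# interpreted as a rigid 3-D scene; otherwise no unique scene is concluded.
depth_from_motion = Observer(
    premises=frozenset({"rigid_2d_motion", "nonrigid_2d_motion"}),
    conclusions=frozenset({"rigid_3d_scene", "no_unique_scene"}),
    interpret=lambda p: "rigid_3d_scene" if p == "rigid_2d_motion" else "no_unique_scene",
)

print(depth_from_motion.interpret("rigid_2d_motion"))  # rigid_3d_scene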

4.
Paul Meehl's contributions to methodology and the philosophy of science extend well beyond his widely known writings in such areas as construct validity and statistical significance testing. I describe one of Meehl's less well-known, but potentially most important, methodological undertakings: his work on metascience, or the science of science. Metascience could ultimately revolutionize our conceptualizations and understanding of science and provide considerable help to practicing scientists and scientific endeavors, including efforts to advance the development and appraisal of theories in psychology.

5.
A theory of partial knowledge is proposed as an explanation of cognitive development, and methods are described for testing the theory. The theory consists of three structure-process pairs, each of which postulates a type of cognitive structure and a developmental process specific to that type. In restricted knowledge, a unitary algorithm is the cognitive structure, and amendment is the developmental process. In variable sampling, a structure of unitary substitutes is paired with a process of selection. In variable integration, modular components are paired with self-monitoring. Methods for testing the theory form a sequence of mathematical models. The first model in the sequence, called a model of double assessment, is described both verbally and mathematically. Other models in the sequence are described verbally with reference to other articles for the formal mathematics. Also described are some nonmathematical methods to be used as sequels to the double assessment model.

6.


An experiment was constructed to test predictions derived from mental model theory. According to this theory, individual words in a sentence provide clues to the building of the mental model of the sentence and need to be interpreted in relation to general knowledge of situations similar to those described in the sentence. After reading a sentence, subjects had to produce, as quickly as possible, one aspect of the meaning of a target noun. The sentence either did or did not contain the target noun, and it primed either one aspect of its meaning or no specific aspect of it. The prediction was that subjects would be faster and more uniform at producing the primed aspect of the target noun after a priming sentence than at producing any aspect of the noun after a non-priming sentence, and that this difference would occur regardless of whether the target noun had occurred in the prior sentence. The results, which confirm the predictions, are discussed in relation to current theories of sentence comprehension.

7.
Knowledge partitioning is a theoretical construct holding that knowledge is not always integrated and homogeneous but may be separated into independent parcels containing mutually contradictory information. Knowledge partitioning has been observed in research on expertise, categorization, and function learning. This article presents a theory of function learning (the population of linear experts model, or POLE) that assumes people partition their knowledge whenever they are presented with a complex task. The authors show that POLE is a general model of function learning that accommodates both benchmark results and recent data on knowledge partitioning. POLE also makes the counterintuitive prediction that a person's distribution of responses to repeated test stimuli should be multimodal. The authors report 3 experiments that support this prediction.
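
A toy sketch of why a population of linear experts predicts multimodal responses to a repeated test stimulus (the numbers and gating probabilities are invented; this is not the published POLE implementation): two linear modules cover different regions of the stimulus space, one is sampled probabilistically on each trial, and repeated queries at a boundary stimulus therefore cluster around two distinct values.

import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical linear experts, each learned for a different training region.
experts = [
    {"slope": 1.0, "intercept": 0.0},    # expert for the rising part of the function
    {"slope": -1.0, "intercept": 10.0},  # expert for the falling part of the function
]

def respond(x, gate_probs):
    """One response: pick an expert by its gating probability, add response noise."""
    k = rng.choice(len(experts), p=gate_probs)
    e = experts[k]
    return e["slope"] * x + e["intercept"] + rng.normal(0.0, 0.3)

# At a stimulus both experts partly claim, responses pile up near 4 and near 6.
responses = [respond(4.0, gate_probs=[0.5, 0.5]) for _ in range(1000)]
counts, edges = np.histogram(responses, bins=20)
print(counts)  # two separated clusters of counts, i.e. a bimodal response distribution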

8.
9.
Robert B. Glassman, Zygon, 2007, 42(3): 651-676
Formalizing a “psychology of science” today will constrain intellectual freedom in ways more likely stultifying than liberating. We should be more improvisational in seeking ideas from academic psychology to develop a more comprehensive purview. I suggest that a psychology of science should look at systematic theology and empirical theology. Liberal theologians have long experience trying to distill from religion those structural aspects that affirm openness in a search for truth. Science, as well as religion, has its myths and rituals, but theologians are more experienced than scientists at a large mythohistorical scale. There are distortions in the extreme degree to which psychological science has traditionally emphasized empiricism, positivism, hypothesis testing, and falsifiability. I argue for less critical reduction and more creative augmentation. This could include looking outside academia at cognitive competencies of people in trades. Exaggerated parsimony is an old story. This is illustrated by the opposition to David Hartley's 1749 theory of neural oscillations. There is an inexorable “margin of uncertainty” where scientific prediction and control can never outstrip the new uses to which human beings put ideas. Facts and values interact in this margin; theology has long made a home there, but scientists sometimes have been excessive in rejecting the “naturalistic fallacy.” There is also often a degree of disingenuousness in psychology's reluctance to take subjective phenomena seriously; here there may be lessons in how empirical theology has handled subjectivity, as well as in taking an honest look at the way much of the methodology of experimental psychology incorporates subjective assessments. Feist's book is a start, but these things need more thought before codifying a psychology of science.

10.
Science is the construction and testing of systems that bind symbols to sensations according to rules. Material implication is the primary rule, providing the structure of definition, elaboration, delimitation, prediction, explanation, and control. The goal of science is not to secure truth, which is a binary function of accuracy, but rather to increase the information about data communicated by theory. This process is symmetric and thus entails an increase in the information about theory communicated by data. Important components in this communication are the elevation of data to the status of facts, the descent of models under the guidance of theory, and their close alignment through the evolving retroductive process. The information mutual to theory and data may be measured as the reduction in the entropy, or complexity, of the field of data given the model. It may also be measured as the reduction in the entropy of the field of models given the data. This symmetry explains the important status of parsimony (how thoroughly the data exploit what the model can say) alongside accuracy (how thoroughly the model represents what can be said about the data). Mutual information is increased by increasing model accuracy and parsimony, and by enlarging and refining the data field under purview.
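
The symmetry described above can be written in standard information-theoretic notation (my rendering, with M for the field of models and D for the field of data) as

\[
I(M; D) \;=\; H(D) - H(D \mid M) \;=\; H(M) - H(M \mid D),
\]

so increasing accuracy (the model saying more about the data) and increasing parsimony (the data exhausting more of what the model can say) both raise the same mutual information.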

11.
Traditional least-squares regression focuses on accurately estimating the current data set, which easily leads to model overfitting and undermines the reproducibility of model conclusions. As the field of methodology has developed, emerging statistical tools can compensate for the limitations of traditional methods, and shifting from an excessive focus on interpreting regression coefficients toward improving the predictive power of research findings has increasingly become an important trend in psychology. By introducing a penalty term into model estimation, the Lasso method achieves higher predictive accuracy and better model generalizability, while also effectively handling overfitting and multicollinearity, thereby aiding the construction and refinement of psychological theory.
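
The penalty term mentioned above is the standard Lasso (L1) penalty; in the usual notation (not taken from the article itself), the coefficient estimates solve

\[
\hat{\beta}^{\text{lasso}} \;=\; \arg\min_{\beta}\left\{ \sum_{i=1}^{n}\Bigl(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Bigr)^{2} \;+\; \lambda \sum_{j=1}^{p} \lvert\beta_j\rvert \right\},
\]

where larger values of the tuning parameter \(\lambda\) shrink more coefficients exactly to zero, trading a little in-sample fit for better out-of-sample prediction and mitigating multicollinearity.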

12.
To address the problems posed by the enormous volume of policy documents and the complexity of their management in the social security domain, the article uses an ontology to represent and store knowledge. The ontology framework is first constructed manually to ensure the relative accuracy of its structure; the ontology is then expanded automatically based on inclusion relationships between property sets or operational object sets. A semi-automatic method, combining statistics with rules, extracts hierarchical and non-hierarchical concepts from a domain thesaurus to build the ontology. In addition, the article proposes a concept phrase vector model and a high-frequency characteristic phrase vector model. The experimental results indicate that the semi-automatic construction process can help experts build a social security ontology effectively from massive collections of policy documents, and that it offers a useful reference for ontology construction in other domains.
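
A minimal sketch, under the assumption that "expansion based on inclusion relationships between property sets" means placing one concept below another whenever its property set strictly contains the other's (the concept names and properties are invented, and the article's actual rule may differ):

# Hypothetical concepts and their property sets.
properties = {
    "benefit": {"amount"},
    "pension benefit": {"amount", "retirement age"},
    "disability benefit": {"amount", "disability grade"},
}

def infer_subsumption(props):
    """Return is-a edges: B is placed under A when A's properties are a strict subset of B's."""
    edges = []
    for a, pa in props.items():
        for b, pb in props.items():
            if a != b and pa < pb:
                edges.append((b, a))  # read as: b is-a a
    return edges

print(infer_subsumption(properties))
# [('pension benefit', 'benefit'), ('disability benefit', 'benefit')]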

13.
Measurements of domain knowledge very often use and report Cronbach's alpha or similar indicators of internal consistency for test construction. In this short article, we argue that this approach is often at odds with the theoretical conception of knowledge underlying the measure. While domain knowledge is usually described theoretically as a formative construct (formed by the manifest observations), the use of Cronbach's alpha to construct and evaluate an empirical measure implies a reflective model (the construct is reflected in manifest behaviors). After illustrating the difference between reflective and formative models, we illustrate how this mismatch between theoretical conception and empirical operationalization can have substantial implications for the assessment and modeling of domain knowledge. Specifically, the construct may be operationalized too narrowly or even be misinterpreted by applying criteria for item selection that focus on homogeneity, such as Cronbach's alpha. Rather than maximizing the items' internal consistency, researchers constructing measures of domain knowledge should therefore make strong arguments for the theoretical merit of their items even if the items are not correlated with each other.
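
For reference, Cronbach's alpha for a k-item scale is, in standard notation,

\[
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]

where \(\sigma^{2}_{Y_i}\) is the variance of item i and \(\sigma^{2}_{X}\) the variance of the total score. The coefficient is high only when items covary strongly, which is exactly the homogeneity assumption a formative reading of domain knowledge does not require.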

14.
We oppose Rychlak's (1991a, 1991b) claim that the view of mind entailed in artificial intelligence (AI) and cognitive psychology is fundamentally at odds with Kelly's (1955) personal construct theory. Kelly's model and AI have much in common: they both are centrally concerned with representation, cognitive processes and their structure, and are ultimately empirical in their methodology. Many AI researchers have usefully embraced personal construct theory as a working conceptual framework. In this article, we examine Rychlak's assertions and identify several mistakes.

15.
Assertion is fundamental to our lives as social and cognitive beings. By asserting, we share knowledge, coordinate behavior, and advance collective inquiry. Accordingly, assertion is of considerable interest to cognitive scientists, social scientists, and philosophers. This paper advances our understanding of the norm of assertion. Prior evidence suggests that knowledge is the norm of assertion, a view known as “the knowledge account.” In its strongest form, the knowledge account says that knowledge is both necessary and sufficient for assertability: you should make an assertion if and only if you know that it is true. The knowledge account has been rejected on the grounds that it conflicts with our ordinary practice of evaluating assertions. This paper reports four experiments that address an important objection of this sort, which focuses on a class of examples known as “Gettier cases.” The results undermine the objection and, in the process, provide further evidence for the knowledge account. The findings also teach some important general lessons about intuitional methodology and the curation of genres of thought experiment.

16.
Traditional null hypothesis testing procedures are poorly adapted to theory testing. The methodology can mislead researchers in several ways, including: (a) a lack of power can result in an erroneous rejection of the theory; (b) the focus on directionality (ordinal tests) rather than more precise quantitative predictions limits the information gained; and (c) the misuse of probability values to indicate effect size. An alternative approach is proposed which involves employing the theory to generate explicit effect size predictions that are compared to the effect size estimates and related confidence intervals to test the theoretical predictions. This procedure is illustrated employing the Transtheoretical Model. Data from a sample (N = 3,967) of smokers from a large New England HMO system were used to test the model. There were a total of 15 predictions evaluated, each involving the relation between Stage of Change and one of 15 other Transtheoretical Model variables. For each variable, omega-squared and the related confidence interval were calculated and compared to the predicted effect sizes. Eleven of the 15 predictions were confirmed, providing support for the theoretical model. Quantitative predictions represent a much more direct, informative, and strong test of a theory than the traditional test of significance.
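
As a reminder of the effect-size estimate the study relies on (the standard one-way ANOVA form, not quoted from the article), omega-squared is estimated as

\[
\hat{\omega}^{2} \;=\; \frac{SS_{\text{effect}} - df_{\text{effect}}\, MS_{\text{error}}}{SS_{\text{total}} + MS_{\text{error}}},
\]

and a theoretical prediction is then evaluated by checking whether the predicted effect size is consistent with the confidence interval around this estimate, rather than by a directional significance test.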

17.
The ability to evaluate scientific claims and evidence is an important aspect of scientific literacy and requires various epistemic competences. Readers spontaneously validate presented information against their knowledge and beliefs but differ in their ability to strategically evaluate the soundness of informal arguments. The present research investigated how students of psychology, compared to scientists working in psychology, evaluate informal arguments. Using a think-aloud procedure, we identified the specific strategies students and scientists apply when judging the plausibility of arguments and classifying common argumentation fallacies. Results indicate that students, compared to scientists, have difficulties forming these judgements and base them on intuition and opinion rather than the internal consistency of arguments. Our findings are discussed using the mental model theory framework. Although introductory students validate scientific information against their knowledge and beliefs, their judgements are often erroneous, in part because their use of strategy is immature. Implications for the systematic training of epistemic competences are discussed.

18.
There is a pervasive sense of unease among social scientists concerning the status of social research. This unease is rooted partly in a false dichotomy between objectivity and subjectivity and a belief that an idealized positivist version of classical physics should be the model for all sciences. Experimental methodology is one of many valid ways of obtaining knowledge and carries with its use a particular set of problems, particularly when social phenomena are studied. A reconceptualization of social research is needed, in which experimental and quasi-experimental methods are used with more caution and are supplemented by a more thorough conceptual apparatus.

19.
The Kantian revolution limited the possibility of ontological knowledge, severing subject from thing, as is evident in its legacy in both continental and analytic philosophy. Consequently, if a thing cannot be known as it is, the philosophical status of empirical science as a study of existing natural things should be called into question. It could be construed, for instance, that a scientific theory is a construction about something to which the subjective constructor can never have ontological access. But when empirical scientists develop evidence-based proofs for their theories, the assumption of realism usually stands: scientific theories constructed by scientists are actually purported to represent natural entities back to these constructing scientists. Given that there is a danger of philosophy becoming isolated from empirical science, we attempt to bridge the gap between philosophical discourse and science-in-praxis through a recapitulation of Aquinas’ ontological epistemology. Aquinas argued for a clarified realism in which the epistemic is construed as an intersection between the thinking subject and the object. Contrary to naïve realism, then, it will be explicated how Aquinas’ realism was a precursor of “critical realism”, as he discerned the complex interaction of the thinking subject and the being of the object as both bearing on the production of knowledge.

20.
Reasoning about relations (total citations: 6; self-citations: 0; citations by others: 0)
Inferences about spatial, temporal, and other relations are ubiquitous. This article presents a novel model-based theory of such reasoning. The theory depends on 5 principles. (a) The structure of mental models is iconic as far as possible. (b) The logical consequences of relations emerge from models constructed from the meanings of the relations and from knowledge. (c) Individuals tend to construct only a single, typical model. (d) They spontaneously develop their own strategies for relational reasoning. (e) Regardless of strategy, the difficulty of an inference depends on the process of integration of the information from separate premises, the number of entities that have to be integrated to form a model, and the depth of the relation. The article describes computer implementations of the theory and presents experimental results corroborating its main principle.
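
A toy sketch, assuming invented premises and a one-dimensional "left of" relation (not the authors' implementation), of the core idea that premises are integrated into a single iconic model from which conclusions are read off rather than derived by formal rules:

def build_model(premises):
    """Integrate 'left of' premises (a, b) into one linear arrangement."""
    order = []
    for a, b in premises:
        if a not in order and b not in order:
            order.extend([a, b])
        elif a in order and b not in order:
            order.insert(order.index(a) + 1, b)
        elif b in order and a not in order:
            order.insert(order.index(b), a)
        # (a toy: premises whose terms are both already placed are left alone)
    return order

model = build_model([("A", "B"), ("B", "C")])
print(model)                                # ['A', 'B', 'C']
print(model.index("A") < model.index("C"))  # emergent conclusion 'A is left of C': True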
