Similar Articles
20 similar articles found (search time: 15 ms)
1.
2.
3.
Erdelyi MH. The Behavioral and Brain Sciences, 2006, 29(5): 499-511; discussion 511-51
Repression has become an empirical fact that is at once obvious and problematic. Fragmented clinical and laboratory traditions and disputed terminology have resulted in a Babel of misunderstandings in which false distinctions are imposed (e.g., between repression and suppression) and necessary distinctions not drawn (e.g., between the mechanism and the use to which it is put, defense being just one). "Repression" was introduced by Herbart to designate the (nondefensive) inhibition of ideas by other ideas in their struggle for consciousness. Freud adapted repression to the defensive inhibition of "unbearable" mental contents. Substantial experimental literatures on attentional biases, thought avoidance, interference, and intentional forgetting exist, the oldest prototype being the work of Ebbinghaus, who showed that intentional avoidance of memories results in their progressive forgetting over time. It has now become clear, as clinicians had claimed, that the inaccessible materials are often available and emerge indirectly (e.g., procedurally, implicitly). It is also now established that the Ebbinghaus retention function can be partly reversed, with resulting increases of conscious memory over time (hypermnesia). Freud's clinical experience revealed early on that exclusion from consciousness was effected not just by simple repression (inhibition) but also by a variety of distorting techniques, some deployed to degrade latent contents (denial), all eventually subsumed under the rubric of defense mechanisms ("repression in the widest sense"). Freudian and Bartlettian distortions are essentially the same, even in name, except for motive (cognitive vs. emotional), and experimentally induced false memories and other "memory illusions" are laboratory analogs of self-induced distortions.

4.
5.
In a unified theory of human reciprocity, the strong and weak forms are similar because neither is biologically altruistic and both require normative motivation to support cooperation. However, strong reciprocity is necessary to support cooperation in public goods games. It involves inflicting costs on defectors, and though the costs for punishers are recouped, recouping them requires complex institutions that would not have emerged if weak reciprocity had been enough.
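A minimal sketch of the mechanism described here, under illustrative assumptions (the group size, endowment, multiplier, and punishment parameters are my own choices, not the article's): strong reciprocators pay a cost to fine each defector after a public goods round, which is what can make defection unprofitable.

```python
# Toy public goods game with costly punishment (illustrative parameters only).
def play_round(strategies, endowment=10, multiplier=1.6, punish_cost=1, punish_fine=4):
    contributions = [0 if s == "defector" else endowment for s in strategies]
    share = sum(contributions) * multiplier / len(strategies)
    payoffs = [endowment - c + share for c in contributions]
    punishers = [i for i, s in enumerate(strategies) if s == "strong"]
    defectors = [i for i, s in enumerate(strategies) if s == "defector"]
    for p in punishers:               # strong reciprocators pay to punish every defector
        for d in defectors:
            payoffs[p] -= punish_cost
            payoffs[d] -= punish_fine
    return payoffs

group = ["strong", "strong", "strong", "defector"]
print(list(zip(group, play_round(group))))   # with enough punishers, the defector earns least
```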

6.
Human morality may be thought of as a negative feedback control system in which moral rules are reference values, and moral disapproval, blame, and punishment are forms of negative feedback given for violations of the moral rules. In such a system, if moral agents held each other accountable, moral norms would be enforced effectively. However, even a properly functioning social negative feedback system could not explain acts in which individual agents uphold moral rules in the face of contrary social pressure. Dr. Frances Kelsey, who withheld FDA approval for thalidomide against intense social pressure, is an example of the degree of individual moral autonomy possible in a hostile environment. Such extreme moral autonomy is possible only if there is internal, psychological negative feedback, in addition to external, social feedback. Such a cybernetic model of morality and moral autonomy is consistent with certain aspects of classical ethical theories.
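A toy sketch of the feedback structure described here, under my own simplifying assumptions (the scales, gains, and update rule are illustrative, not the author's model): external social feedback and internal psychological feedback both push conduct back toward the moral reference value, and only the internal loop keeps working when social feedback is absent or hostile.

```python
# Toy negative feedback loop for moral conduct (illustrative assumptions only).
def corrective_feedback(conduct, reference, social_gain=0.4, internal_gain=0.4,
                        social_feedback_present=True):
    error = reference - conduct                  # degree of violation of the moral rule
    social = social_gain * error if social_feedback_present else 0.0
    internal = internal_gain * error             # internal, psychological feedback
    return social + internal

conduct, reference = 0.2, 1.0                    # compliance with the rule on a 0..1 scale
for _ in range(6):                               # no social support, as in the Kelsey example
    conduct += corrective_feedback(conduct, reference, social_feedback_present=False)
    print(round(conduct, 3))                     # conduct still converges toward the rule
```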

7.
This article presents a unified theory of human reasoning. The goal of the theory is to specify what constitutes reasoning, as opposed to other psychological processes, and to characterize the psychological distinction between inductive and deductive reasoning. The theory views reasoning as the controlled and mediated application of three processes—selective encoding, selective comparison, and selective combination—to inferential rules. The first two of these processes are essentially inductive in nature; the third is essentially deductive. The theory describes these three processes, specifies the kinds of inferential rules and their use in several reasoning tasks, and specifies the mediators that affect how well the processes can be applied to the rules. The theory is shown to apply to a variety of reasoning tasks and is compared to other theories as well.

8.
9.
This article introduces McDonald's unified treatment of test theory, which merges the major contributions of Spearman (True Score Theory and Common Factor Theory) with aspects of Item Response Theory. The fundamentals are first presented, followed by elaboration of selected aspects of the treatment. An SAS program is given that estimates relevant parameters.
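The abstract mentions an SAS program for estimating the relevant parameters; as a hedged stand-in (not McDonald's program, and in Python rather than SAS), the sketch below simulates item scores from a one-common-factor model, recovers the loadings with a one-step principal-axis estimate, and computes omega, a reliability coefficient central to the unified treatment.

```python
import numpy as np

# Simulate a one-common-factor model and estimate loadings and omega (illustrative sketch).
rng = np.random.default_rng(0)
n, k = 2000, 6
true_loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5, 0.6])
factor = rng.standard_normal(n)
unique = rng.standard_normal((n, k)) * np.sqrt(1 - true_loadings**2)
items = np.outer(factor, true_loadings) + unique               # item scores, unit variance

R = np.corrcoef(items, rowvar=False)                           # observed correlation matrix
reduced = R.copy()
np.fill_diagonal(reduced, 1 - 1 / np.diag(np.linalg.inv(R)))   # SMC communality estimates
vals, vecs = np.linalg.eigh(reduced)
loadings = vecs[:, -1] * np.sqrt(vals[-1])                     # one principal-axis step
loadings *= np.sign(loadings.sum())                            # resolve the sign indeterminacy
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings**2).sum())
print(np.round(loadings, 2), round(float(omega), 3))
```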

10.
11.
12.
The unfinished nature of Beauchamp and Childress’s account of the common morality after 34 years and seven editions raises questions about what is lacking, specifically in the way they carry out their project, more generally in the presuppositions of the classical liberal tradition on which they rely. Their wide-ranging review of ethical theories has not provided a method by which to move beyond a hypothetical approach to justification or, on a practical level regarding values conflict, beyond a questionable appeal to consensus. My major purpose in this paper is to introduce the thought of Bernard Lonergan as offering a way toward such a methodological breakthrough. In the first section, I consider Beauchamp and Childress’s defense of their theory of the common morality. In the second, I relate a persisting vacillation in their argument regarding the relative importance of reason and experience to a similar tension in classical liberal theory. In the third, I consider aspects of Lonergan’s generalized empirical method as a way to address problems that surface in the first two sections of the paper: (1) the structural relation of reason and experience in human action; and (2) the importance of theory for practice in terms of what Lonergan calls “common sense” and “general bias.”

13.
One of the most debated questions in psychology and cognitive science is the nature and the functioning of the mental processes involved in deductive reasoning. However, all existing theories refer to a specific deductive domain, like syllogistic, propositional or relational reasoning.
Our goal is to unify the main types of deductive reasoning into a single set of basic procedures. In particular, we bring together the microtheories developed from a mental models perspective in a single theory, for which we provide a formal foundation. We validate the theory through a computational model (UNICORE) which allows fine-grained predictions of subjects' performance in different reasoning domains.
The performance of the model is tested against the performance of experimental subjects—as reported in the relevant literature—in the three areas of syllogistic, relational and propositional reasoning. The computational model proves to be a satisfactory artificial subject, reproducing both correct and erroneous performance of the human subjects. Moreover, we introduce a developmental trend in the program, in order to simulate the performance of subjects of different ages, ranging from children (3–6) to adolescents (8–12) to adults (>21). The simulation model performs similarly to the subjects of different ages.
Our conclusion is that the validity of the mental model approach is confirmed for the deductive reasoning domain, and that it is possible to devise a unique mechanism able to deal with the specific subareas. The proposed computational model (UNICORE) represents such a unifying structure.
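The UNICORE program itself is not reproduced in this abstract; as an illustrative stand-in (my own toy, not the authors' code), the sketch below treats mental models as the possibilities consistent with the premises and accepts a conclusion only when it holds in every such model, roughly the fully fleshed-out, normatively correct limit of mental-model reasoning.

```python
from itertools import product

# Toy mental-models check for propositional reasoning (illustrative, not UNICORE).
def models(premises, atoms):
    """Yield every assignment of truth values (a 'model') consistent with all premises."""
    for values in product([True, False], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if all(premise(world) for premise in premises):
            yield world

def follows(premises, conclusion, atoms):
    """A conclusion follows if it holds in every model of the premises."""
    worlds = list(models(premises, atoms))
    return bool(worlds) and all(conclusion(w) for w in worlds)

# "If it rains, the street is wet" and "It rains" entail "The street is wet".
premises = [lambda w: (not w["rain"]) or w["wet"], lambda w: w["rain"]]
print(follows(premises, lambda w: w["wet"], ["rain", "wet"]))        # True  (modus ponens)
print(follows(premises, lambda w: not w["rain"], ["rain", "wet"]))   # False
```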

14.
15.
People can use a variety of strategies to perform tasks, and these strategies all have two characteristics in common. First, they can be evaluated in comparison with either an absolute or a relative standard. Second, they can be used at varying levels of consistency. In the present article, the authors develop a general theory of task performance called potential performance theory (PPT) that distinguishes between observed scores and true scores that are corrected for inconsistency (i.e., potential scores). In addition, they argue that any kind of improvement to task performance, whatever it may be, works by influencing either task strategies, which comprise all nonrandom components that are relevant to the task, or the consistency with which strategies are used. In the current study, PPT is used to demonstrate how task strategies and the consistencies with which they are used impact actual performance in the domain of morality. These conclusions are extended to other domains of task performance.
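A small simulation of the observed-score/potential-score distinction, under my own assumptions (the published PPT equations are not reproduced here): an agent's strategy would be correct on 90% of binary items, but it is applied with only 70% consistency, with guessing otherwise; the potential score is what the observed score would become at perfect consistency.

```python
import random

# Observed vs. potential performance under imperfect consistency (illustrative sketch).
def simulate(strategy_accuracy=0.9, consistency=0.7, trials=100_000):
    correct = 0
    for _ in range(trials):
        if random.random() < consistency:            # the strategy is actually applied
            correct += random.random() < strategy_accuracy
        else:                                        # inconsistent trial: coin flip
            correct += random.random() < 0.5
    return correct / trials

consistency = 0.7
observed = simulate(consistency=consistency)
potential = (observed - 0.5 * (1 - consistency)) / consistency   # back out the guessing noise
print(round(observed, 3), round(potential, 3))       # ~0.78 observed vs ~0.90 potential
```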

16.
17.
18.
We present interpretation-based processing—a theory of sentence processing that builds a syntactic and a semantic representation for a sentence and assigns an interpretation to the sentence as soon as possible. That interpretation can further participate in comprehension and in lexical processing and is vital for relating the sentence to the prior discourse. Our theory offers a unified account of the processing of literal sentences, metaphoric sentences, and sentences containing semantic illusions. It also explains how text can prime lexical access. We show that word literality is a matter of degree and that the speed and quality of comprehension depend both on how similar words are to their antecedents in the preceding text and on how salient the sentence is with respect to that text. Interpretation-based processing also reconciles superficially contradictory findings about the difference in processing times for metaphors and literals. The theory has been implemented in ACT-R [Anderson and Lebiere, The Atomic Components of Thought, Lawrence Erlbaum Associates Publishers, Mahwah, NJ, 1998].

19.
Garnham A. Cognition, 1989, 31(1): 45-60
This paper presents a unified account of the meaning of the spatial relational terms right, left, in front of, behind, above and below. It claims that each term has three types of meaning (basic, deictic, and intrinsic) and that the definitions of each type of meaning are identical in form for all six terms. Restrictions on the use of the terms, which are different for above and below than for the rest, are explained by a general constraint on all uses of spatial relational terms, the framework vertical constraint. This constraint depends on the existence of a fourth type of meaning for above and below, one defined by the framework in which the related objects are located. It is argued that a theory centred on the framework vertical constraint is preferable to one centred on the principle of canonical orientation (Levelt, 1984).
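To make the deictic/intrinsic contrast concrete, here is a toy sketch under my own simplifying assumptions (a single horizontal axis and ad hoc predicates; these are not Garnham's definitions): the deictic reading of "in front of" uses the viewer's line of sight, while the intrinsic reading uses the reference object's own front.

```python
# Toy deictic vs. intrinsic readings of "X is in front of Y" on one axis (illustrative only).
def in_front_of_deictic(figure_x, ground_x, viewer_x):
    # Deictic: the figure lies between the viewer and the ground object.
    lo, hi = sorted((viewer_x, ground_x))
    return lo < figure_x < hi

def in_front_of_intrinsic(figure_x, ground_x, ground_facing):
    # Intrinsic: the figure lies on the side the ground object faces (facing = +1 or -1).
    return (figure_x - ground_x) * ground_facing > 0

print(in_front_of_deictic(figure_x=2, ground_x=5, viewer_x=0))           # True
print(in_front_of_intrinsic(figure_x=2, ground_x=5, ground_facing=-1))   # True
print(in_front_of_intrinsic(figure_x=2, ground_x=5, ground_facing=+1))   # False
```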

20.
Melvin MA. Synthese, 1982, 50(3): 359-397
A survey is given of the concepts of interaction (force) and matter, i.e., of process and substance. The development of these concepts, first in antiquity, then in early modern times, and finally in the contemporary system of quantum field theory is described. After a summary of the basic phenomenological attributes (coupling strengths, symmetry quantities, charges), the common ground of concepts of quantum field theory for both interactions and matter entities is discussed. Then attention is focused on the gauge principle which has been developed to describe all interaction fields in the same way, and hopefully to unite them all into one unified field. While a similar unification of all fundamental types of matter fields (quarks and leptons) into one family may be possible (SU(5)), there still remains at this level a duality between interaction quanta (bosons with spin 1) and matter particles (fermions with spin 1/2). Whether this duality may be removed in some future supersymmetric theory is not discussed in this paper. Nor is Quantum Gravitation discussed, though the analogy of the gauge principle for the three fundamental non-gravitational interactions (hadronic, electromagnetic and weak) to Einstein's principle of equivalence for gravitation in spacetime is noted. However, the equivalence concept is applied not to spacetime but to the internal spaces for the matter (or charge) fields which are the sources between which the fundamental interactions operate. The gauge principle states that a change in the measures of the internal-space charges (gauges or phases) of the matter fields is equivalent to, and can be compensated by, suitably introduced interaction fields. From such an interaction field, the gauge potential field in the internal space, one may derive a gauge force field by exterior differentiation. Geometrically, the collection of all internal spaces, one over each point of spacetime, constitutes a fiber bundle. The gauge potential field represents a connection on the fiber bundle, and the gauge force field is the curvature (calculated by taking the exterior derivative of the connection and adding to it the exterior product of the connection with itself). Thus, just as gravitational force is interpreted as spacetime curvature, so the three other fundamental forces are interpretable as internal space curvature. The Standard Model, which unites the three non-gravitational fields into an SU(3)_c × SU(2) × U(1) structure, and the grand unified model, SU(5), are discussed briefly, and difficulties are noted. Finally it is suggested that a composite model, based on more subtle structure, may be needed to remove the present obscurities and difficulties that stand in the way of a unified theory.
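The curvature construction described in the last part of this abstract can be written compactly in modern notation; this is a standard formulation added for illustration (the symbols A, F, ψ, g are my shorthand, not the paper's):

```latex
\begin{aligned}
  F &= dA + A \wedge A
    && \text{gauge force field: exterior derivative of the connection plus its exterior square}\\
  D\psi &= d\psi + A\,\psi
    && \text{matter fields are differentiated with the gauge potential as a connection}\\
  \psi &\mapsto g\,\psi,\qquad A \mapsto g A g^{-1} - (dg)\,g^{-1}
    && \text{a local change of gauge (phase) of the matter field is compensated by } A
\end{aligned}
```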

