Similar Articles
20 similar articles found.
1.
Theories concerning the structure, or format, of mental representation should (1) be formulated in mechanistic, rather than metaphorical terms; (2) do justice to several philosophical intuitions about mental representation; and (3) explain the human capacity to predict the consequences of worldly alterations (i.e., to think before we act). The hypothesis that thinking involves the application of syntax‐sensitive inference rules to syntactically structured mental representations has been said to satisfy all three conditions. An alternative hypothesis is that thinking requires the construction and manipulation of the cognitive equivalent of scale models. A reading of this hypothesis is provided that satisfies condition (1) and which, even though it may not fully satisfy condition (2), turns out (in light of the frame problem) to be the only known way to satisfy condition (3).

2.
The ability to combine words into novel sentences has been used to argue that humans have symbolic language production abilities. Critiques of connectionist models of language often center on the inability of these models to generalize symbolically (Fodor & Pylyshyn, 1988; Marcus, 1998). To address these issues, a connectionist model of sentence production was developed. The model had variables (role‐concept bindings) that were inspired by spatial representations (Landau & Jackendoff, 1993). In order to take advantage of these variables, a novel dual‐pathway architecture with event semantics is proposed and shown to be better at symbolic generalization than several variants. This architecture has one pathway for mapping message content to words and a separate pathway that enforces sequencing constraints. Analysis of the model's hidden units demonstrated that the model learned different types of information in each pathway, and that the model's compositional behavior arose from the combination of these two pathways. The model's ability to balance symbolic and statistical behavior in syntax acquisition and to model aphasic double dissociations provided independent support for the dual‐pathway architecture.
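To make the dual-pathway idea concrete, here is a deliberately tiny sketch in Python in which one score stands in for the content pathway and another for the sequencing pathway, and the produced word must satisfy both. This is not the authors' connectionist model; every name, number, and weight is invented for illustration.

# Toy "dual-pathway" word choice, for illustration only (not the published model).
# One pathway carries message content, the other carries sequencing constraints;
# the word that is produced has to satisfy both sources of constraint.

def choose_word(candidates, content_score, sequence_score,
                w_content=0.5, w_sequence=0.5):
    def combined(word):
        return w_content * content_score(word) + w_sequence * sequence_score(word)
    return max(candidates, key=combined)

# Invented numbers: the content pathway favours event participants ("dog", "ball"),
# while the sequencing pathway favours whatever fits the current determiner slot.
message = {"dog": 0.9, "ball": 0.8, "chases": 0.7, "the": 0.1}
slot_preference = {"the": 0.95, "dog": 0.1, "ball": 0.1, "chases": 0.05}

print(choose_word(message, message.get, slot_preference.get))  # -> "the"

With the content weight raised, the same machinery would output a content word instead; the only point is that two separable sources of constraint jointly determine each word.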

3.
Plausibility has been implicated as playing a critical role in many cognitive phenomena from comprehension to problem solving. Yet, across cognitive science, plausibility is usually treated as an operationalized variable or metric rather than being explained or studied in itself. This article describes a new cognitive model of plausibility, the Plausibility Analysis Model (PAM), which is aimed at modeling human plausibility judgment. This model uses commonsense knowledge of concept-coherence to determine the degree of plausibility of a target scenario. In essence, a highly plausible scenario is one that fits prior knowledge well: with many different sources of corroboration, without complexity of explanation, and with minimal conjecture. A detailed simulation of empirical plausibility findings is reported, which shows a close correspondence between the model and human judgments. In addition, a sensitivity analysis demonstrates that PAM is robust in its operations.
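The toy scorer below only illustrates the characterization of plausibility given above, i.e., more corroboration, less explanatory complexity, and less conjecture; it is not PAM, and the field names, weights, and example values are assumptions made for the sketch.

# Toy plausibility scorer illustrating the idea only (not PAM itself).
# All field names, weights, and example values are invented.

from dataclasses import dataclass

@dataclass
class Scenario:
    corroborating_sources: int  # independent pieces of prior knowledge that support it
    explanation_steps: int      # length of the explanation connecting it to what we know
    conjectures: int            # unsupported assumptions the explanation needs

def plausibility(s: Scenario) -> float:
    # Higher when well corroborated; lower when the explanation is long or conjectural.
    support = 1.0 - 1.0 / (1 + s.corroborating_sources)
    penalty = 0.1 * s.explanation_steps + 0.2 * s.conjectures
    return max(0.0, support - penalty)

print(plausibility(Scenario(corroborating_sources=3, explanation_steps=1, conjectures=0)))  # ~0.65: well supported, simple
print(plausibility(Scenario(corroborating_sources=1, explanation_steps=3, conjectures=2)))  # 0.0: weakly supported, conjectural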

4.
The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During recognition, the model compares incoming preprocessed natural input to stored representations. By complementing the recognition memory process with a perceptual front end, the NIM model is able to make predictions about memorability based directly on individual natural stimuli. We demonstrate that the NIM model is able to simulate experimentally obtained similarity ratings and recognition memory for individual stimuli (i.e., face images).
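A minimal sketch of the recognition step described above, assuming a generic feature-vector comparison (cosine similarity of a probe against stored study items). It is not the NIM code; the vectors and the threshold are invented for the example.

# Illustrative recognition-by-similarity step in the spirit of the abstract
# (not the actual NIM model). Feature vectors stand in for preprocessed
# fixation samples; the threshold is an arbitrary choice for the example.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recognize(probe, stored_vectors, threshold=0.8):
    # Call "old" if the probe's best match to any stored representation is strong enough.
    best = max((cosine(probe, v) for v in stored_vectors), default=0.0)
    return ("old" if best >= threshold else "new"), best

studied = [[0.9, 0.1, 0.3], [0.2, 0.8, 0.5]]
print(recognize([0.88, 0.15, 0.28], studied))  # close to a studied item -> ('old', ~0.99)
print(recognize([0.1, 0.1, 0.9], studied))     # unlike anything studied -> ('new', ~0.63)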

5.
Traditional artificial intelligence, built on the "physical symbol system hypothesis," adopted the idea that low-level structure is discontinuous: it separated conceptualization from the semantic grounding of concepts and treated the simulation of thought processes as something achievable purely by formal methods. In practice, however, this a-posteriori approach faces a dual crisis of theory and implementation. Rich semantics is indispensable for genuinely meaningful, creative intelligent behavior, which has made the problem of semantics a focal issue across the different application branches of artificial intelligence; it comprises three aspects: semantic acquisition, representation, and use. To understand and solve the semantic problem, it must be examined together with other intelligent behaviors as a continuous, interrelated, and indivisible cognitive structure, and the construction of intelligent models must be approached from a systems perspective. The cornerstone of this unified cognitive structure is a direct representation of semantics, grounded in neurophysiology and supported by psychophysiological accounts of perception, which in turn provides a foundation for unifying the various other intelligent behaviors modeled by the dynamics of the nervous system.

6.
We have developed a process model that learns in multiple ways while finding faults in a simple control panel device. The model predicts human participants' learning through its own learning. The model's performance was systematically compared to human learning data, including the time course and specific sequence of learned behaviors. These comparisons show that the model accounts very well for measures such as problem-solving strategy, the relative difficulty of faults, and average fault-finding time. More important, because the model learns and transfers its learning across problems, it also accounts for the faster problem-solving times due to learning when examined across participants, across faults, and across the series of 20 trials on an individual participant basis. The model shows how learning while problem solving can lead to more recognition-based performance, and helps explain how the shape of the learning curve can arise through learning and be modified by differential transfer. Overall, the quality of the correspondence appears to have arisen from procedural, declarative, and episodic learning all taking place within individual problem-solving episodes.

7.
Explanations of cognitive processes provided by traditional artificial intelligence were based on the notion of the knowledge level. This perspective has been challenged by new AI that proposes an approach based on embodied systems that interact with the real world. We demonstrate that these two views can be unified. Our argument is based on the assumption that knowledge level explanations can be defined in the context of Bayesian theory while the goals of new AI are captured by using a well-established, robot-based model of learning and problem solving, called Distributed Adaptive Control (DAC). In our analysis we consider random foraging and we prove that minor modifications of the DAC architecture render a model that is equivalent to a Bayesian analysis of this task. Subsequently, we compare this enhanced, “rational,” model to its “non‐rational” predecessor and a further control condition using both simulated and real robots, in a variety of environments. Our results show that the changes made to the DAC architecture, in order to unify the perspectives of old and new AI, also lead to a significant improvement in random foraging.

8.
Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure‐Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity‐based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large‐scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time, O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large‐scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before.
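To make the greedy-merging step concrete, here is a heavily simplified, hypothetical sketch: match hypotheses are sorted by their structural score and folded into an interpretation only if they keep the correspondence one-to-one. The real SME operates over structured relational representations and enforces more constraints; this shows only the greedy-selection idea, with invented scores.

# Greatly simplified illustration of greedy merging of match hypotheses
# (not the actual SME code). A "hypothesis" pairs a base item with a target
# item and carries a score; greedy merging keeps the best-scoring hypotheses
# that preserve a one-to-one correspondence.

def greedy_merge(hypotheses):
    # hypotheses: list of (score, base_item, target_item); higher score is better.
    mapping, used_base, used_target, total = {}, set(), set(), 0.0
    # Sort by score, then accept each hypothesis only if it keeps the mapping
    # one-to-one. The full algorithm does much more (structural consistency,
    # parallel connectivity, multiple interpretations).
    for score, base, target in sorted(hypotheses, reverse=True):
        if base not in used_base and target not in used_target:
            mapping[base] = target
            used_base.add(base)
            used_target.add(target)
            total += score
    return mapping, total

hyps = [(0.9, "sun", "nucleus"), (0.8, "planet", "electron"),
        (0.4, "sun", "electron"), (0.3, "planet", "nucleus")]
print(greedy_merge(hyps))  # sun->nucleus, planet->electron; total score ~1.7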

9.
Kent Johnson, Synthese, 2007, 156(2): 253-279
The empirical nature of our understanding of language is explored. I first show that there are several important and different distinctions between tacit and accessible awareness. I then present empirical evidence concerning our understanding of language. The data suggests that our awareness of sentence-meanings is sometimes merely tacit according to one of these distinctions, but is accessible according to another. I present and defend an interpretation of this mixed view. The present project is shown to impact on several diverse areas, including inferential role semantics and holism, the nature of learning, and the role of linguistics in the law. I am indebted to a number of people for their useful feedback, especially Peter Ludlow, Paul Pietroski, and two anonymous reviewers. Earlier versions of this paper were presented at an Eastern meeting of the APA, a meeting of the Society for Exact Philosophy at Simon Fraser University, and at a semantics workshop in Ottawa, Canada. I greatly appreciate the comments from those audiences.

10.
11.
The article picks up some ideas that Ann Taves presents in her book Religious Experience Reconsidered, and looks at possible conversations that are not fleshed out in detail in Taves’ book. In particular, it is argued that the disciplinary confrontation with philosophy and with historiography is of crucial importance if the disciplines of cognitive science and psychology of religion want to become in the future what they pretend to be now—a serious alternative and complement to the study of religion as we know it from other contexts, such as cultural studies and historiography.

12.
The article picks up some ideas that Ann Taves presents in her book Religious Experience Reconsidered, and looks at possible conversations that are not fleshed out in detail in Taves' book. In particular, it is argued that the disciplinary confrontation with philosophy and with historiography is of crucial importance if the disciplines of cognitive science and psychology of religion want to become in the future what they pretend to be now—a serious alternative and complement to the study of religion as we know it from other contexts, such as cultural studies and historiography.

13.
Emerging parallel processing and increased flexibility during the acquisition of cognitive skills form a combination that is hard to reconcile with rule-based models that often produce brittle behavior. Rule-based models can exhibit these properties by adhering to 2 principles: that the model gradually learns task-specific rules from instructions and experience, and that bottom-up processing is used whenever possible. In a model of learning perfect time-sharing in dual tasks (Schumacher et al., 2001), speedup learning and bottom-up activation of instructions can explain parallel behavior. In a model of a complex dynamic task (Carnegie Mellon University Aegis Simulation Program [CMU-ASP], Anderson et al., 2004), parallel behavior is explained by the transition from serially organized instructions to rules that are activated by both top-down (goal-driven) and bottom-up (perceptually driven) factors. Parallelism lets the model opportunistically reorder instructions, leading to the gradual emergence of new task strategies.

14.
Since Pascal introduced the idea of mathematical probability in the 17th century, discussions of uncertainty and “rational” belief have been dogged by philosophical and technical disputes. Furthermore, the last quarter century has seen an explosion of new questions and ideas, stimulated by developments in the computer and cognitive sciences. Competing ideas about probability are often driven by different intuitions about the nature of belief that arise from the needs of different domains (e.g., economics, management theory, engineering, medicine, the life sciences, etc.). Taking medicine as our focus, we develop three lines of argument (historical, practical and cognitive) that suggest that traditional views of probability cannot accommodate all the competing demands and diverse constraints that arise in complex real-world domains. A model of uncertain reasoning based on a form of logical argumentation appears to unify many diverse ideas. The model has precursors in informal discussions of argumentation due to Toulmin, and the notion of logical probability advocated by Keynes, but recent developments in artificial intelligence and cognitive science suggest ways of resolving epistemological and technical issues that they could not address.

15.
Hale, J., Cognitive Science, 2006, 30(4): 643-672
A word-by-word human sentence processing complexity metric is presented. This metric formalizes the intuition that comprehenders have more trouble on words contributing larger amounts of information about the syntactic structure of the sentence as a whole. The formalization is in terms of the conditional entropy of grammatical continuations, given the words that have been heard so far. To calculate the predictions of this metric, Wilson and Carroll's (1954) original entropy reduction idea is extended to infinite languages. This is demonstrated with a mildly context-sensitive language that includes relative clauses formed on a variety of grammatical relations across the Accessibility Hierarchy of Keenan and Comrie (1977). Predictions are derived that correlate significantly with repetition accuracy results obtained in a sentence-memory experiment (Keenan & Hawkins, 1987).
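For readers unfamiliar with the metric, the toy calculation below shows the general idea on a hypothetical four-sentence language: at each word, the entropy of the distribution over sentences still compatible with the prefix is computed, and the drop in entropy is the predicted processing cost. This is only the entropy-reduction idea in miniature, not Hale's grammar-based computation over infinite languages; the sentences and probabilities are made up.

# Toy illustration of an entropy-reduction complexity profile (not Hale's
# actual grammar-based computation). The "language" is four hypothetical
# sentences with invented probabilities.

import math

language = {
    ("the", "dog", "barked"): 0.4,
    ("the", "dog", "slept"): 0.3,
    ("the", "cat", "slept"): 0.2,
    ("a", "cat", "slept"): 0.1,
}

def entropy(probs):
    total = sum(probs)
    return -sum((p / total) * math.log2(p / total) for p in probs if p > 0)

def entropy_profile(sentence):
    # Entropy reduction at each word: H(continuations before) - H(continuations after).
    reductions = []
    for i, word in enumerate(sentence):
        before = [p for s, p in language.items() if s[:i] == sentence[:i]]
        after = [p for s, p in language.items() if s[:i + 1] == sentence[:i + 1]]
        reductions.append(max(0.0, entropy(before) - entropy(after)))
    return reductions

print(entropy_profile(("the", "dog", "barked")))
# larger numbers = more disambiguation work predicted at that word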

16.
Meehl’s article related to theoretical risks and tabular asterisks reminds us of the importance of thinking and theorizing as well as hypothesizing and testing, a point that is as relevant today in soft psychology as it was in 1978. Although progress is impeded by the profusion of tabular asterisks, we should recognize as well that the course of events is shaped by the actors involved—theories not only fade away, they are excluded by these actors.

17.
We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism—neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non‐neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.

18.
A modestly generic, innovative problem-solving process, with roots in the study of design and of scientific research problem solving, is presented and motivated. It is argued to be the shared core process of all problem solving. At its heart is a recognition of five foci or nodes of change vital to the process (changes in problem and solution formulation, method, constraints, and partial solution proposals), together with a bootstrap marked by the formation of higher-order knowledge about problem solving in the domain in tandem with the solving of specific problems, the essential feature of all learned improvement. None of these elements is entirely original, but the way they are made explicit and developed (rather than folded into fewer, more abstract, boxes) is argued to provide fresh understanding of the organisation and power of the process to deal with complex practical problems.

19.
Karl Schweizer, Intelligence, 2007, 35(6): 591-604
The impurity of measures is considered as a cause of erroneous interpretations of observed relationships. This paper concentrates on impurity with respect to the relationship between working memory and fluid intelligence. The means for the identification of impurity was the fixed-links model, which enabled the decomposition of variance into experimental and non-experimental parts. A substantial non-experimental part could be expected to signify impurity. In a sample of 345 participants, error scores and reaction times, which were obtained by the Exchange Test, represented working memory, and the Advanced Progressive Matrices served as the measure of fluid intelligence. The four independent latent variables of the model associated with error scores and reaction times led to a multiple correlation of .67 with the latent variable of fluid intelligence. However, there was impurity, since the decomposition by means of the fixed-links model showed that only 45% of the common variance was due to working memory.
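As a schematic of the variance decomposition that a fixed-links approach relies on (the notation and fixed values are illustrative only, not Schweizer's exact specification), one latent variable receives loadings fixed a priori to follow the experimental manipulation while a second receives constant loadings:

% Schematic fixed-links decomposition (illustrative notation only).
% X_{ij}: observed score of person j in condition i.
% \eta^{\mathrm{exp}}: latent variable with loadings fixed to track the manipulation.
% \eta^{\mathrm{non}}: latent variable with constant loadings (non-experimental variance).
X_{ij} = \lambda_i^{\mathrm{fixed}}\,\eta_j^{\mathrm{exp}} + 1\cdot\eta_j^{\mathrm{non}} + \varepsilon_{ij},
\qquad
\operatorname{Var}(X_i) = \bigl(\lambda_i^{\mathrm{fixed}}\bigr)^2 \operatorname{Var}(\eta^{\mathrm{exp}})
+ \operatorname{Var}(\eta^{\mathrm{non}}) + \operatorname{Var}(\varepsilon_i).

In this schematic reading, a sizeable non-experimental component relative to the experimental one is what the abstract treats as evidence of impurity.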

20.
There exist various guidelines for facilitating the design, preparation, and deployment of accessible eLearning applications and contents. However, such guidelines prevalently address accessibility in a rather technical sense, without giving sufficient consideration to the cognitive aspects and issues related to the use of eLearning materials by learners with disabilities. In this paper we describe how a user-centered design process was applied to develop a method and set of guidelines for didactical experts to scaffold their creation of accessible eLearning content, based on a more sound approach to accessibility. The paper also discusses possible design solutions for tools supporting eLearning content authors in the adoption and application of the proposed approach.
