Similar Documents
20 similar documents found (search time: 15 ms)
1.
Earlier, we studied computations possible by physical systems and by algorithms combined with physical systems. In particular, we analysed the idea of using an experiment as an oracle to an abstract computational device, such as the Turing machine. The theory of composite machines of this kind can be used to understand (a) a Turing machine receiving extra computational power from a physical process, or (b) an experimenter, modelled as a Turing machine, performing a test of a known physical theory T. Our earlier work was based upon experiments in Newtonian mechanics. Here we extend the scope of the theory of experimental oracles beyond Newtonian mechanics to electrical theory. First, we specify an experiment that measures resistance using a Wheatstone bridge and begin to classify the computational power of this experimental oracle using non-uniform complexity classes. Secondly, we show that modelling an experimenter and experimental procedure algorithmically imposes a limit on our ability to measure resistance by the Wheatstone bridge. The connection between the algorithm and the physical test is mediated by a protocol controlling each query, especially the physical time taken by the experimenter. In our studies we find that physical experiments have an exponential-time protocol; this we formulate as a general conjecture. Our theory proposes that measurability in physics is subject to laws which are collateral effects of the limits of computability and computational complexity.
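The flavour of such an oracle can be conveyed by a small sketch. This is our illustration under assumed details, not the paper's actual protocol: the unknown resistance lies in [0, 1), each bridge query asks whether it lies below a threshold, and distinguishing values to k bits of precision is assumed to cost time exponential in k.

```python
def query_bridge(threshold, true_resistance, time_budget):
    """One oracle call: is the unknown resistance below `threshold`?
    Assumption: resolving a gap of 2**-k costs about 2**k time units, so the
    call times out when the gap is finer than the budget can resolve."""
    precision_bits = time_budget.bit_length() - 1   # largest k with 2**k <= budget
    if abs(true_resistance - threshold) < 2 ** -precision_bits:
        return None                                 # too close to call in time
    return true_resistance < threshold

def measure(true_resistance, n_bits):
    """Bisection protocol: extract n_bits of the resistance, one query per bit.
    The time budget grows exponentially in the precision requested, echoing
    the exponential-time protocol conjectured in the paper."""
    lo, hi = 0.0, 1.0
    for k in range(1, n_bits + 1):
        answer = query_bridge((lo + hi) / 2, true_resistance,
                              time_budget=2 ** (2 * k + 4))
        if answer is None:                          # boundary case: stop early
            break
        lo, hi = (lo, (lo + hi) / 2) if answer else ((lo + hi) / 2, hi)
    return lo

print(measure(true_resistance=0.3141592, n_bits=10))   # 0.3134765625
```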

2.
Tim Maudlin has influentially argued that Humeanism about laws of nature stands in conflict with quantum mechanics. Specifically, Humeanism implies the principle of Separability: the complete physical state of a world is determined by the intrinsic physical state of each space-time point. Maudlin argues that Separability is violated by the entangled states posited by QM. We argue that Maudlin only establishes that a stronger principle, which we call Strong Separability, is in tension with QM; Separability itself is not. Moreover, while the Humean requires Separability to capture the core tenets of her view, there is no Humean-specific motivation for accepting Strong Separability. We go on to give a Humean account of entangled states which satisfies Separability. The core idea is that certain quantum states depend upon the Humean mosaic in much the same way as the laws do. In fact, we offer a variant of the Best System account on which the systematization procedure that generates the laws also serves to ground these states. We show how this account works by applying it to the example of Bohmian mechanics. The 3N-dimensional configuration space, the world particle in it, and the wave function on it are part of the best system of the Humean mosaic, which consists of N particles moving in 3-dimensional space. We argue that this account is superior to the Humean account of Bohmian mechanics defended by Loewer and Albert, which takes the 3N-dimensional space, and its inhabitants, as fundamental.

3.
A Turing test is proposed to evaluate current computational and associative models of learning, and to guide theoretical developments. This test requires a specification of the procedures to which the model applies, a sampling of procedures and response measures, and an objective way to determine the difficulty of discriminating the responses of the model from the responses of the animal. Scalar timing theory is used as an example of a well-developed computational theory of timing that involves addition, multiplication, division, and sampling. The behavioral theory of timing is used as an example of a well-developed associative theory of timing that involves state transitions and strengthening of connections. A Turing test provides a way to evaluate such theories.
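A toy rendering of scalar timing's arithmetic, under parameter values we have assumed for illustration; the theory itself specifies pacemaker, accumulator, memory, and comparison stages in far more detail:

```python
import random

def scalar_timing_response(t_elapsed, remembered_time, threshold=0.2, mem_cv=0.15):
    """One trial of a scalar-timing decision (toy sketch, our parameter choices).
    Sample a criterion from memory (multiplicative, scalar noise), then respond
    when the relative discrepancy |m - t| / m falls below a threshold."""
    m = random.gauss(remembered_time, mem_cv * remembered_time)  # sampling
    return abs(m - t_elapsed) / m < threshold                    # ratio comparison

random.seed(1)
respond_at = [t for t in range(1, 21) if scalar_timing_response(t, 10.0)]
print(respond_at)   # responding clusters around the remembered time of 10
```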

4.
Gary Bartlett, Erkenntnis, 2012, 76(2): 195–209
Very plausibly, nothing can be a genuine computing system unless it meets an input-sensitivity requirement. Otherwise all sorts of objects, such as rocks or pails of water, can count as performing computations, even computations that might suffice for mentality, thus threatening computationalism about the mind with panpsychism. Maudlin (J Philos 86:407–432, 1989) and Bishop (2002a, b) have argued, however, that such a requirement creates difficulties for computationalism about conscious experience, putting it in conflict with the very intuitive thesis that conscious experience supervenes on physical activity. Klein (Synthese 165:141–153, 2008) proposes a way for computationalists about experience to avoid panpsychism while still respecting the supervenience of experience on activity. I argue that his attempt to save computational theories of experience from Maudlin's and Bishop's critique fails.

5.
6.
Varma, S., Cognitive Science, 2011, 35(7): 1329–1351
Cognitive architectures are unified theories of cognition that take the form of computational formalisms. They support computational models that collectively account for large numbers of empirical regularities using small numbers of computational mechanisms. Empirical coverage and parsimony are the most prominent criteria by which architectures are designed and evaluated, but they are not the only ones. This paper considers three additional criteria that have been comparatively undertheorized. (a) Successful architectures possess subjective and intersubjective meaning, making cognition comprehensible to individual cognitive scientists and organizing groups of like-minded cognitive scientists into genuine communities. (b) Successful architectures provide idioms that structure the design and interpretation of computational models. (c) Successful architectures are strange: They make provocative, often disturbing, and ultimately compelling claims about human information processing that demand evaluation.

7.
Execution architectures for program algebra
We investigate the notion of an execution architecture in the setting of the program algebra PGA, and distinguish two sorts: analytic architectures, designed for the purpose of explanation and provided with a process-algebraic, compositional semantics, and synthetic architectures, focusing on how a program may be a physical part of an execution architecture. We then discuss in detail the Turing machine, a well-known example of an analytic architecture. The logical core of the halting problem, the inability to forecast the termination behaviour of programs, leads us to several approaches to and examples of related issues: forecasters and rational agents. In particular, we consider architectures suitable to run a Newcomb Paradox system and the Prisoner's Dilemma.
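The "logical core of the halting problem" invoked here is the familiar diagonal construction; a standard sketch of it (not the paper's PGA formalization) runs as follows:

```python
# Standard diagonalization: no total, correct termination forecaster exists.
# Suppose halts(p, x) returned True exactly when program p halts on input x.

def make_spoiler(halts):
    """Build the program that defeats the forecaster it is handed."""
    def spoiler(p):
        if halts(p, p):       # forecaster predicts halting...
            while True:       # ...so run forever;
                pass
        return None           # otherwise, halt at once.
    return spoiler

# Running make_spoiler(halts) on its own text contradicts the forecaster's
# verdict either way, so no such total, correct `halts` can exist.
```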

8.
The famous diagonal argument plays a prominent role in set theory as well as in the proofs of undecidability results in computability theory and incompleteness results in metamathematics. Lawvere (1969) brings to light the common schema among them through an elegant fixpoint theorem which generalizes the diagonal argument behind Cantor's theorem and characterizes self-reference explicitly in category theory. Until Yanofsky (2003) rephrased Lawvere's fixpoint theorem in terms of sets and functions, Lawvere's work had been largely overlooked by logicians. This paper continues Yanofsky's work and shows further applications of Lawvere's fixpoint theorem, demonstrating the ubiquity of the theorem. For example, the paper uses it to construct an uncomputable real number, an unnameable real number, a partial recursive but not potentially recursive function, the Berry paradox, and the fast-growing Busy Beaver function. Many interesting lambda fixpoint combinators can also be fitted into this schema. Both Curry's Y combinator and Turing's Θ combinator follow from Lawvere's theorem, as do their call-by-value versions. Finally, it can be shown that the lambda calculus version of the fixpoint lemma also fits Lawvere's schema.
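Curry's Y combinator in its call-by-value form (the Z combinator) is easy to exhibit; here is a minimal sketch in Python rather than the paper's lambda-calculus notation:

```python
# Call-by-value fixed-point combinator Z, an eta-expanded variant of Curry's Y:
# Z = λf. (λx. f (λv. x x v)) (λx. f (λv. x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A non-recursive "step" functional whose fixed point is the factorial function.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = Z(fact_step)
print(factorial(6))   # 720; Z(fact_step) behaves as fact_step(Z(fact_step))
```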

9.
A (Turing) ideal is a set of Turing degrees satisfying two closure conditions: it is downward closed, and the supremum of any pair of degrees in I is again in I. Countable ideals are important not only in the study of the global properties of the Turing degrees, but also played an important role in early research on the fine structure of Gödel's constructible universe L. Two key concepts in the study of countable ideals are exact pairs and uniform upper bounds. With these two concepts, a countable ideal can be reduced to a single Turing degree (a uniform upper bound) or to a pair of Turing degrees (an exact pair). Previous research shows that the two concepts are closely connected, and it raises further questions about their relationship. In this paper, we prove the following theorem: for any countable ideal I, there exist two uniform upper bounds a0 and a1 of I such that a0 and a1 form an exact pair for I. This theorem answers, in the affirmative, a question raised by Lerman concerning the ideal formed by the arithmetic Turing degrees. The proof is a carefully modified version of the classical exact-pair construction: we impose subtle restrictions during the construction so that each of the two degrees a0 and a1 can, to a certain extent, independently recover the whole construction by approximation, and thereby each gives a uniform enumeration of the countable ideal I. In the respective approximations for a0 and a1 we employ the finite injury method. The paper concludes by noting some properties of the Turing jumps of a0 and a1.
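For readers outside recursion theory, the key notions and the theorem can be stated as follows; the notation is the standard one, not taken from the paper:

```latex
% Standard definitions (our phrasing), not the paper's notation.
A set $I$ of Turing degrees is an \emph{ideal} if it is downward closed
($\mathbf{b} \le \mathbf{a} \in I \Rightarrow \mathbf{b} \in I$) and closed
under joins ($\mathbf{a}, \mathbf{b} \in I \Rightarrow \mathbf{a} \vee \mathbf{b} \in I$).

Degrees $\mathbf{a}_0, \mathbf{a}_1$ form an \emph{exact pair} for $I$ if
$I = \{\, \mathbf{c} : \mathbf{c} \le \mathbf{a}_0 \text{ and } \mathbf{c} \le \mathbf{a}_1 \,\}$.

A degree $\mathbf{a}$ is a \emph{uniform upper bound} of a countable ideal $I$
if some $\mathbf{a}$-computable function uniformly enumerates a sequence of
sets whose degrees are exactly the members of $I$.

\textbf{Theorem (as stated in the abstract).} For every countable ideal $I$
there exist uniform upper bounds $\mathbf{a}_0, \mathbf{a}_1$ of $I$ that
form an exact pair for $I$.
```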

10.
What are the characteristics of long-term learning? We investigated long-term, symbolic learning using the Soar and ACT-R cognitive architectures running cognitive models of two simple tasks. Long sequences of problems were run, collecting data to answer fundamental questions about long-term, symbolic learning. We examined whether symbolic learning continues indefinitely, how the learned knowledge is used, and whether computational performance degrades over the long term. We report three findings. First, in both systems, symbolic learning eventually stopped. Second, learned knowledge was used differently in different stages, but the resulting production knowledge was used uniformly. Finally, both Soar and ACT-R do eventually suffer from degraded computational performance under long-term continuous learning. We also discuss the implementation-level and theoretical causes of ACT-R's computational performance problems, and settings that appear to avoid them.

11.
We argue that the set of humanly known mathematical truths (at any given moment in human history) is finite and so recursive. But if so, then given various fundamental results in mathematical logic and the theory of computation, such as Craig's theorem (J Symb Log 18(1):30–32, 1953), the set of humanly known mathematical truths is axiomatizable. Furthermore, given Gödel's First Incompleteness Theorem (Monatsh Math Phys 38:173–198, 1931), at any given moment in human history humanly known mathematics must be either inconsistent or incomplete. Moreover, since humanly known mathematics is axiomatizable, it can be the output of a Turing machine. We then argue that any given mathematical claim that we could possibly know could be the output of a Turing machine, at least in principle. So the Lucas-Penrose argument (Lucas, Philosophy 36:112–127, 1961; Penrose, The Emperor's New Mind, Oxford University Press, Oxford, 1994) cannot be sound.
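The step from recursive enumerability to axiomatizability rests on Craig's trick, which is short enough to sketch here (our reconstruction of the cited result, not the authors' exposition):

```latex
% Craig's trick: every recursively enumerable theory is recursively axiomatizable.
Let $\varphi_0, \varphi_1, \varphi_2, \dots$ be a recursive enumeration of the
theorems. Replace each $\varphi_n$ by the logically equivalent axiom
\[
  \psi_n \;=\; \underbrace{\varphi_n \wedge \varphi_n \wedge \cdots \wedge \varphi_n}_{n+1 \ \text{conjuncts}}.
\]
The set $\{\psi_n : n \in \mathbb{N}\}$ is decidable: given a candidate axiom,
count its conjuncts to read off $n$, compute $\varphi_n$, and check for a match.
Hence the theory has a recursive axiomatization.
```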

12.
In this paper, I explore an intriguing view of definable numbers proposed by the Cambridge mathematician Ernest Hobson, and his solution to the paradoxes of definability. Reflecting on König's paradox and Richard's paradox, Hobson argues that an unacceptable consequence of the paradoxes of definability is that there are numbers that are inherently incapable of finite definition. In contrast to other interpreters, Hobson locates the problem of the paradoxes of definability in a dichotomy between finitely definable numbers and numbers that are not finitely definable. To bypass this predicament, Hobson proposes a language-dependent analysis of definable numbers, on which the diagonal argument is employed as a means to generate more and more definable numbers. This paper examines Hobson's work in its historical context and articulates his argument in detail. It concludes with a remark on Hobson's analysis of definability and Alan Turing's analysis of computability.
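Hobson's diagonal move can be pictured with a minimal sketch (our illustration, with a finite stand-in for what would really be an infinite enumeration of definitions):

```python
def diagonal_digit(digit_of_nth, n):
    """Digit n of the diagonal real: differ from the n-th defined real at place n."""
    d = digit_of_nth(n, n)        # n-th digit of the n-th enumerated real
    return 5 if d != 5 else 6     # any rule avoiding d (and 0/9 tails) works

def digit_of_nth(k, n):
    """Toy 'language': definition k names the real with constant digit k mod 10."""
    return k % 10

# The diagonal real differs from real k at digit k, so it escapes the list;
# yet these few lines are themselves a finite definition of it, in a richer language.
digits = [diagonal_digit(digit_of_nth, n) for n in range(8)]
print("0." + "".join(map(str, digits)))   # 0.55555655
```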

13.
Social information, such as observing others' behaviour, can improve performance in decision making. In particular, social information has been shown to be useful when finding the best solution on one's own is difficult, costly, or dangerous. However, past research suggests that when making decisions people do not always take other people's behaviour into account when it is at odds with their own experiences. Furthermore, the cognitive processes guiding the integration of social information with individual experiences are still under debate. Here, we conducted two experiments to test whether information about other persons' behaviour influenced people's decisions in a classification task, and we examined how social information is integrated with individual learning experiences by testing different computational models. Our results show that social information had a small but reliable influence on people's classifications. The best computational model suggests that in categorization people first make up their own mind based on the non-social information, and then update this judgement in the light of the social information.
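The two-stage structure of the winning model can be sketched as follows; this is our minimal reconstruction under assumed functional forms (a logistic individual belief and a single linear weight on the social signal), not the authors' fitted model:

```python
import math

def classify(own_evidence, social_majority, w_social=0.15):
    """Two-stage choice: form a private belief from individual learning first,
    then shift it toward the observed majority choice of others.

    own_evidence    : log-odds for category A from non-social information (assumed form)
    social_majority : fraction of observed others who chose category A
    w_social        : small weight on the social signal (free parameter)
    """
    p_own = 1.0 / (1.0 + math.exp(-own_evidence))             # stage 1: private belief
    p = (1 - w_social) * p_own + w_social * social_majority   # stage 2: social update
    return ("A" if p > 0.5 else "B"), p

choice, p = classify(own_evidence=0.4, social_majority=0.2)
print(choice, round(p, 3))   # a small social pull against a weak private belief
```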

14.
Within the program of finding axiomatizations for various parts of computability logic, it was proven earlier that the logic of interactive Turing reduction is exactly the implicative fragment of Heyting's intuitionistic calculus. That sort of reduction permits unlimited reuse of the computational resource represented by the antecedent. An at least equally basic and natural sort of algorithmic reduction, however, is the one that does not allow such reuse. The present article shows that turning the logic of the first sort of reduction into the logic of the second requires nothing more than deleting the contraction rule from its Gentzen-style axiomatization. The first (Turing) sort of interactive reduction is also shown to come in three natural versions. While those three versions are very different from each other, their logical behaviours (in isolation) turn out to be indistinguishable, with that common behaviour precisely captured by implicative intuitionistic logic. Among the other contributions of the present article is an informal introduction of a series of new, finite and bounded, versions of recurrence operations and the associated reduction operations.
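For reference, the deleted rule is the standard left contraction of Gentzen-style calculi (shown in generic sequent notation, not the article's):

```latex
% Left contraction: the rule whose removal separates the two reductions.
\[
  \frac{\Gamma,\, A,\, A \,\vdash\, B}{\Gamma,\, A \,\vdash\, B}
  \quad (\text{contraction})
\]
% With contraction, the antecedent resource $A$ may be used any number of times
% (Turing-style reduction); without it, each copy of $A$ licenses a single use.
```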

15.
16.
New technologies based on artificial agents promise to change the next generation of autonomous systems and therefore our interaction with them. Systems based on artificial agents, such as self-driving cars and social robots, are examples of this technology, which seeks to improve the quality of people's lives. Cognitive architectures aim to create some of the most challenging artificial agents, commonly known as bio-inspired cognitive agents. This type of artificial agent seeks to embody human-like intelligence in order to operate and solve problems in the real world as humans do. Moreover, some cognitive architectures, such as Soar, LIDA, ACT-R, and iCub, aim to serve as foundational architectures for Artificial General Intelligence models of human cognition. Researchers in the machine ethics field therefore face questions about what mechanisms an artificial agent must have for making moral decisions, in order to ensure that its actions are always ethically right. This paper aims to identify some challenges that researchers need to solve in order to create ethical cognitive architectures: architectures characterized by the capacity to endow artificial agents with appropriate mechanisms to exhibit explicit ethical behavior. Additionally, we offer some reasons to develop ethical cognitive architectures. We hope that this study can be useful in guiding future research on ethical cognitive architectures.

17.
In 1950, Alan Turing proposed his eponymous test, based on the indistinguishability of verbal behavior, as a replacement for the question "Can machines think?" Since then, two mutually contradictory but well-founded attitudes towards the Turing Test have arisen in the philosophical literature. On the one hand is the attitude that has become philosophical conventional wisdom, viz., that the Turing Test is hopelessly flawed as a sufficient condition for intelligence, while on the other hand is the overwhelming sense that were a machine to pass a real, live, full-fledged Turing Test, it would be a sign of nothing but our orneriness to deny it the attribution of intelligence. The arguments against the sufficiency of the Turing Test for determining intelligence rely on showing that some extra conditions are logically necessary for intelligence beyond the behavioral properties exhibited by an agent under a Turing Test; therefore, it cannot follow logically from passing a Turing Test that the agent is intelligent. I argue that these extra conditions can be revealed by the Turing Test, so long as we allow a very slight weakening of the criterion from one of logical proof to one of statistical proof under weak realizability assumptions. The argument depends on the notion of interactive proof developed in theoretical computer science, along with some simple physical facts that constrain the information capacity of agents. Crucially, the weakening is so slight as to make no conceivable difference from a practical standpoint. Thus, the Gordian knot between the two opposing views of the sufficiency of the Turing Test can be cut.
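The statistical weakening can be pictured as an ordinary hypothesis test on a judge's guesses; this toy sketch illustrates the idea only, not the article's interactive-proof construction:

```python
from math import comb

def passes_turing_test(correct, trials, alpha=0.01):
    """Toy statistical criterion: the machine passes unless the judge identifies
    it significantly more often than chance (one-sided binomial test)."""
    # P(X >= correct) under H0: the judge is guessing, p = 0.5
    p_value = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials
    return p_value >= alpha    # chance-level discrimination cannot be rejected

print(passes_turing_test(correct=55, trials=100))   # True: consistent with guessing
print(passes_turing_test(correct=70, trials=100))   # False: reliably distinguishable
```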

18.
This paper is a philosophical enquiry into the role that mathematics plays in the articulation of science. It is conducted, in its essentials, in the spirit of Wittgenstein's views on the nature and function of philosophy, which are to lay bare, as it were, the manner in which we do whatever it is that we do, and then to examine the claims that we make for the deed. My conclusions should be easily accessible to those familiar with his thinking on the subject of science.

The case that has inspired the writing of this paper is not that of biology, nor is it the biological theory of evolution; rather, the case I have kept in mind while writing this paper is that of cognitive science, sometimes presented as a “science of mind” by its practitioners. It is primarily a computational theory characterized by two distinct approaches, one internal, the gist of which is that the brain/mind distinction is definitely passé; the other external, based on the view that the mark of human mentation is to be found in the ordinary use of old expressions to convey new meanings, i.e. in the Cartesian test for the existence of other minds, and its simpler computational version, the Turing test. Two intuitions underlie the paper: one, that language is obviously an adaptive characteristic of human organisms: one learns one's own mother's tongue, and feral children cannot conceptualize if first exposed to language after reaching puberty; two, empirical evidence supports the view that the “knowing brain” is different architecturally from the “untutored” one. These intuitions warrant regarding man's cognitive apparatus as an evolutionary system, and the “mind” as an emergent property.

19.
The Turing Test: the first 50 years
The Turing Test, originally proposed as a simple operational definition of intelligence, has now been with us for exactly half a century. It is safe to say that no other single article in computer science, and few other articles in science in general, have generated so much discussion. The present article chronicles the comments and controversy surrounding Turing's classic article from its publication to the present. The changing perception of the Turing Test over the last 50 years has paralleled the changing attitudes in the scientific community towards artificial intelligence: from the unbridled optimism of the 1960s to the current realization of the immense difficulties that still lie ahead. I conclude with the prediction that the Turing Test will remain important, not only as a landmark in the history of the development of intelligent machines, but also for its real relevance to future generations of people living in a world in which the cognitive capacities of machines will be vastly greater than they are now.

20.
Copeland and others have argued that the Church–Turing thesis (CTT) has been widely misunderstood by philosophers and cognitive scientists. In particular, they have claimed that CTT is in principle compatible with the existence of machines that compute functions above the “Turing limit,” and that empirical investigation is needed to determine the “exact membership” of the set of functions that are physically computable. I argue for the following points: (a) It is highly doubtful that philosophers and cognitive scientists have widely misunderstood CTT as alleged. In fact, by and large, computability theorists and mathematical logicians understand CTT in the exact same way. (b) That understanding most likely coincides with what Turing and Church had in mind. Even if it does not, an accurate exegesis of Turing and Church need not dictate how today's working scientists understand the thesis. (c) Even if we grant Copeland's reading of CTT, an orthodox stronger version of it which he rejects (Gandy's thesis) follows readily if we only accept a highly plausible necessary condition for what constitutes a deterministic digital computer. Finally, (d) regardless of whether we accept this condition, the prospects for a scientific theory of hypercomputation are exceedingly poor, because physical science does not have the wherewithal to investigate computability or to discover its ultimate “limit.”

