  Fee-based full text   5 papers
  Free   1 paper
  2022   1 paper
  2019   1 paper
  2018   1 paper
  2015   1 paper
  2010   1 paper
  2006   1 paper
Sort order: 6 results found (search time: 15 ms)
1.
Four experiments are presented that explore situations in which a decision maker has to rely on personal experience in an attempt to minimize delays. Experiment 1 shows that risk-attitude in these timesaving decisions is similar to risk-attitude in money-related decisions from experience: A risky prospect is more attractive than a safer prospect with the same expected value only when it leads to a better outcome most of the time. Experiment 2 highlights a boundary condition: It suggests that a difficulty in ranking the relevant delays moves behavior toward random choice. Experiments 3 and 4 show that when actions must be taken during the delay (thereby helping compare delays), this increases the similarity of timesaving decisions to money-related decisions. In these settings the results reflect an increase in risk aversion with experience. The relationship of the results to the study of non-human time-related decisions, human money-related decisions and human time perception is discussed.
2.
3.
Elber-Dorozko, Lotem; Shagrir, Oron. Synthese, 2019, 199(1): 43-66

It is generally accepted that, in the cognitive and neural sciences, there are both computational and mechanistic explanations. We ask how computational explanations can integrate into the mechanistic hierarchy. The problem stems from the fact that implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical, implementing states is a homomorphism mapping relation. The mechanistic relation, however, is that of part/whole; the explaining features in a mechanistic explanation are the components of the explanandum phenomenon and their causal organization. Moreover, each component in one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent states and properties that implement them. How then, do the computational and the implementational integrate to create the mechanistic hierarchy? After explicating the general problem (Sect. 2), we further demonstrate it through a concrete example, of reinforcement learning, in the cognitive and neural sciences (Sects. 3 and 4). We then examine two possible solutions (Sect. 5). On one solution, the mechanistic hierarchy embeds at the same levels computational and implementational properties. This picture fits with the view that computational explanations are mechanistic sketches. On the other solution, there are two separate hierarchies, one computational and another implementational, which are related by the implementation relation. This picture fits with the view that computational explanations are functional and autonomous explanations. It is less clear how these solutions fit with the view that computational explanations are full-fledged mechanistic explanations. 
Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (Sect. 6).
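The reinforcement learning discussed above can be illustrated with a minimal sketch of the kind of computational-level description whose relation to its neural implementation is at issue. This is a generic tabular Q-learning update, not the specific model the authors analyze: it is defined entirely over abstract states and actions, with nothing in it fixing a particular implementing medium.

```python
# Minimal tabular Q-learning: a computational-level description defined
# over abstract states and actions, silent about neural implementation.
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One temporal-difference update of the action-value table Q."""
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    td_error = reward + gamma * best_next - Q[state][action]
    Q[state][action] += alpha * td_error
    return Q

Q = defaultdict(lambda: defaultdict(float))
q_update(Q, "s0", "a0", reward=1.0, next_state="s1")
print(Q["s0"]["a0"])  # 0.1 after one update from an all-zero table
```

The point of the sketch is that every variable here (`Q`, `state`, `td_error`) is medium-independent, which is exactly what makes its integration into a part/whole mechanistic hierarchy non-obvious.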

4.
Lotem Elber-Dorozko. Synthese, 2018, 195(12): 5319-5337
A popular view presents explanations in the cognitive sciences as causal or mechanistic and argues that an important feature of such explanations is that they allow us to manipulate and control the explanandum phenomena. Nonetheless, whether there can be explanations in the cognitive sciences that are neither causal nor mechanistic is still under debate. Another prominent view suggests that both causal and non-causal relations of counterfactual dependence can be explanatory, but this view is open to the criticism that it is not clear how to distinguish explanatory from non-explanatory relations. In this paper, I draw from both views and suggest that, in the cognitive sciences, relations of counterfactual dependence that allow manipulation and control can be explanatory even when they are neither causal nor mechanistic. Furthermore, the ability to allow manipulation can determine whether non-causal counterfactual dependence relations are explanatory. I present a preliminary framework for manipulation relations that includes some non-causal relations and use two examples from the cognitive sciences to show how this framework distinguishes between explanatory and non-explanatory, non-causal relations. The proposed framework suggests that, in the cognitive sciences, causal and non-causal relations have the same criterion for explanatory value, namely, whether or not they allow manipulation and control.
5.
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this manner takes the form of a directed weighted graph, whose nodes are recursively (hierarchically) defined patterns over the elements of the input stream. We evaluated the model in seventeen experiments, grouped into five studies, which examined, respectively, (a) the generative ability of grammar learned from a corpus of natural language, (b) the characteristics of the learned representation, (c) sequence segmentation and chunking, (d) artificial grammar learning, and (e) certain types of structure dependence. The model's performance largely vindicates our design choices, suggesting that progress in modeling language acquisition can be made on a broad front—ranging from issues of generativity to the replication of human experimental findings—by bringing biological and computational considerations, as well as lessons from prior efforts, to bear on the modeling approach.
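The data structure described above — a directed weighted graph whose nodes are recursively defined patterns over input elements — can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the class name, the transition-counting scheme, and the chunking threshold are all assumptions made for the example.

```python
# Toy sketch of a grammar as a directed weighted graph: edges count
# transitions between nodes, and frequent bigrams are merged into new
# hierarchical pattern nodes (patterns defined over their own parts).
from collections import defaultdict

class GraphGrammar:
    def __init__(self):
        # node -> node -> transition count (edge weight)
        self.edges = defaultdict(lambda: defaultdict(int))

    def observe(self, tokens):
        """Count transitions between adjacent elements of the input stream."""
        for a, b in zip(tokens, tokens[1:]):
            self.edges[a][b] += 1

    def chunk(self, a, b, min_count=2):
        """Merge a frequent bigram into a new, recursively defined pattern node."""
        if self.edges[a][b] >= min_count:
            node = (a, b)  # the new pattern is defined over its parts
            self.edges[a][b] = 0
            return node
        return None

g = GraphGrammar()
g.observe(["the", "dog", "barks", "the", "dog", "sleeps"])
print(g.chunk("the", "dog"))  # ('the', 'dog') once the bigram is frequent
```

Repeatedly applying `chunk` to the output of `observe` yields nodes over nodes, which is the sense in which the learned patterns are hierarchical.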
6.
Ben-Oren, Yotam; Truskanov, Noa; Lotem, Arnon. Animal Cognition, 2022, 25(6): 1545-1555
Animal Cognition - Based on past experience, food-related cues can help foragers to predict the presence and the expected quality of food. However, when the food is already visible there is no need...
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号