8 query results (search time: 31 ms)
1.
2.
When shown the same actions and information by an adult, preschoolers' causal learning was influenced by the pedagogical context in which those actions occurred. Four-year-olds who were given a reason for an experimenter's action that was relevant to learning causal structure showed more accurate causal learning than children exposed to the same action and data accompanied by an inappropriate rationale or by no explanatory information. These results suggest that children's accurate causal learning is influenced by contextual factors that specify the instructional value of others' actions.
3.
In multicausal abductive tasks, a person must explain some findings by assembling a composite hypothesis that consists of one or more elementary hypotheses. If there are n elementary hypotheses, there can be up to 2^n composite hypotheses. To constrain the search for hypotheses to explain a new observation, people sometimes use their current explanation (the previous evidence together with their present composite hypothesis of that evidence); however, it is unclear when and how the current explanation is used. In addition, although a person's current explanation can narrow the search for a hypothesis, it can also blind the problem solver to alternative, possibly better, explanations. This paper describes a model of multicausal abductive reasoning that makes two predictions regarding the use of the current explanation. The first prediction is that the current explanation is not used to explain new evidence if there is a simple (i.e., nondisjunctive, concrete) hypothesis to account for that evidence. The second prediction is that the current explanation is used when attempting to discriminate among several alternative hypotheses for new evidence. These predictions were tested in three experiments. The results are consistent with the second prediction: the current explanation is used when discriminating among alternative hypotheses. However, the first prediction, that the current explanation is not used when a simple hypothesis can account for new data, received only limited support. Participants used the current explanation to constrain their interpretation of new data in 46.5% of all trials. This suggests that context-independent strategies compete with context-dependent ones, an interpretation that is consistent with recent work on strategy selection during problem solving.
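The combinatorial point above can be sketched directly: if each composite hypothesis is a subset of the elementary hypotheses, then n elementary hypotheses yield 2^n candidate composites. A minimal illustration (the hypothesis names below are placeholders, not taken from the study):

```python
from itertools import combinations

def composite_hypotheses(elementary):
    """Enumerate all composite hypotheses: every subset (including the
    empty set) of the elementary hypotheses."""
    composites = []
    for size in range(len(elementary) + 1):
        composites.extend(combinations(elementary, size))
    return composites

# With n = 3 elementary hypotheses there are 2**3 = 8 candidate composites,
# which is why unconstrained search becomes intractable as n grows.
elems = ["flu", "allergy", "infection"]
print(len(composite_hypotheses(elems)))  # 8
```

This is why the model's constraint, reusing the current explanation, matters: it prunes an exponentially large hypothesis space at the cost of possibly overlooking better composites.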
4.
How do reasoners deal with inconsistencies? James (1907) believed that the rational solution is to revise your beliefs, and to do so in a minimal way. We propose an alternative: you explain the origins of an inconsistency, which has the side effect of a revision to your beliefs. This hypothesis predicts that individuals should spontaneously create explanations of inconsistencies rather than refute one of the assertions, and that they should rate explanations as more probable than refutations. A pilot study showed that participants spontaneously explain inconsistencies when they are asked what follows from inconsistent premises. In three subsequent experiments, participants were asked to compare explanations of inconsistencies against minimal refutations of the inconsistent premises. In Experiment 1, participants chose which conclusion was most probable; in Experiment 2, they rank-ordered the conclusions by their probability; and in Experiment 3, they estimated the mean probability of the conclusions' occurrence. In all three studies, participants rated explanations as more probable than refutations. The results imply that individuals create explanations to resolve an inconsistency and that these explanations lead to changes in belief. Changes in belief are therefore of secondary importance to the primary goal of explanation.
5.
As the level of automation in vehicles increases, as in conditionally and highly automated vehicles (AVs), drivers are increasingly taken out of the control loop, especially in unexpected driving scenarios. Although it may not be necessary for drivers to intervene on most occasions, it is still important to improve drivers' situation awareness (SA) in unexpected driving scenarios in order to improve their trust in and acceptance of AVs. In this study, we conceptualized SA at the levels of perception (SA L1), comprehension (SA L2), and projection (SA L3), and proposed an SA level-based explanation framework grounded in explainable AI. We then examined the effects of these explanations and their modalities on drivers' situational trust, cognitive workload, and explanation satisfaction. A three (SA level: SA L1, SA L2, SA L3) by two (explanation modality: visual, visual + audio) between-subjects experiment was conducted with 340 participants recruited from Amazon Mechanical Turk. The results indicated that explanations designed with the proposed SA-based framework helped participants redirect their attention to the important objects in the traffic scene and understand their meaning for the AV system. This improved their SA, filled gaps in understanding how the AV's behavior corresponded to particular situations, and increased their situational trust in the AV. Participants reported the highest trust with SA L2 explanations, although mental workload was also rated higher at this level. The results also provided insight into the relationship between the amount of information in explanations and their modality: participants were more satisfied with visual-only explanations in the SA L1 and SA L2 conditions and with combined visual and auditory explanations in the SA L3 condition.
Finally, we found that cognitive workload was higher in the SA L2 condition, possibly because participants were actively interpreting the explanations, consistent with their higher situational trust. These findings demonstrate that properly designed explanations, based on our proposed SA-based framework, have significant implications for explaining AV behavior in conditional and highly automated driving.
6.
Young children show competence in reasoning about how ownership affects object use. In the present experiments, we investigated how influential ownership is for young children by examining their explanations. In three experiments, we asked 3- to 5-year-olds (N = 323) to explain why it was acceptable (Experiments 1–3) or unacceptable (Experiments 2 and 3) for a person to use an object. In Experiments 1 and 2, older preschoolers referenced ownership more than alternative considerations when explaining why it was acceptable or unacceptable for a person to use an object, even though ownership was not mentioned to them. In Experiment 3, ownership was mentioned to children. Here, younger preschoolers frequently referenced ownership when explaining the unacceptability of using an object, but not when explaining why using it was acceptable. These findings suggest that ownership is influential in preschoolers' explanations about the acceptability of using objects, but that the scope of its influence increases with age.
7.
Despite technological advances, trust remains a major issue facing autonomous vehicles. Existing studies have reported that explanations of an automation system's status can be an effective strategy for increasing trust, but the effects can differ depending on the form of the explanation and the autonomous driving situation. To address this issue, this study examines the effects of explanation type and perceived risk on trust in autonomous vehicles. Three types of explanations (no, simple, and attributional explanations) were designed based on attribution theory. Additionally, four autonomous driving situations with different levels of risk were designed in a simulator program. Results show that explanation type significantly affects trust in autonomous vehicles, and that the perceived risk of the driving situation significantly moderates the effect of explanation type. At a high level of perceived risk, attributional explanations and no explanations lead to the lowest and highest values in trust, respectively. However, at a low level of perceived risk, these effects reverse.
8.
Creating explanations is an important process for students, not only to make connections between novel information and background knowledge, but also to communicate their understanding of a given topic. This article explores students' explanations in the context of computational science and engineering, an important interdisciplinary field that enables scientists and engineers to solve complex problems. Specifically, this study explores: (a) students' approaches to creating written explanations of programming code, and (b) the relationship between students' explanations and their ability to do computer programming. Students wrote in-code comments for three MATLAB® worked examples, which were qualitatively analyzed using a coding scheme. Different approaches to self-explaining were identified using hierarchical cluster analysis, and differences in students' programming ability were identified using analysis of variance. The resulting approaches to self-explaining were: original solution, mechanistic, principle-based, limited, and goal-based. The findings suggest that experienced students wrote simple in-code comments to self-explain, whereas students with lower programming ability wrote more comprehensive explanations, perhaps treating the task as a learning opportunity.
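As an informal illustration of the comment styles such a coding scheme might distinguish: the study's worked examples were in MATLAB®, and the function and comments below are hypothetical, not drawn from the study's materials.

```python
def moving_average(xs, k):
    # Principle-based: a moving average smooths noise by averaging each
    # window of k consecutive values.
    out = []
    for i in range(len(xs) - k + 1):
        # Mechanistic: sum the window xs[i:i+k] and divide by k.
        out.append(sum(xs[i:i + k]) / k)
    # Goal-based: return one smoothed value per complete window.
    return out

print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5]
```

A "limited" explanation, by contrast, might simply restate a line ("divide by k") without connecting it to the window-averaging principle or the overall goal.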

Copyright©北京勤云科技发展有限公司  京ICP备09084417号