Investigating explanations in conditional and highly automated driving: The effects of situation awareness and modality |
| |
Affiliation: | 1. Insurance Institute for Highway Safety, United States; 2. Massachusetts Institute of Technology AgeLab, New England University Transportation Center, United States; 1. Department of Sociology, Anthropology and Criminal Justice, Clemson University, Clemson, SC, United States; 2. School of Civil and Environmental Engineering, Georgia Institute of Technology, Atlanta, GA, United States; 1. School of Automotive Engineering, Dalian University of Technology, Dalian 116024, China; 2. Ningbo Institute of Dalian University of Technology, No. 26 Yucai Road, Jiangbei District, Ningbo 315016, China; 3. College of Electromechanical & Information Engineering, Dalian Minzu University, Dalian 116600, China; 1. Centre for Accident Research and Road Safety-Queensland (CARRS-Q), Institute of Health and Biomedical Innovation (IHBI), Queensland University of Technology, Kelvin Grove, Queensland 4059, Australia; 2. Seeing Machines Ltd., Fyshwick, ACT, Australia; 1. Department of Industrial and Manufacturing Systems Engineering, University of Michigan, Dearborn, MI, USA; 2. Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI, USA |
| |
Abstract: | As the level of automation in vehicles increases, as in conditional and highly automated vehicles (AVs), drivers are increasingly removed from the control loop, especially in unexpected driving scenarios. Although it might not be necessary to require drivers to intervene on most occasions, it is still important to improve drivers' situation awareness (SA) in unexpected driving scenarios in order to improve their trust in and acceptance of AVs. In this study, we conceptualized SA at the levels of perception (SA L1), comprehension (SA L2), and projection (SA L3), and proposed an SA level-based explanation framework grounded in explainable AI. We then examined the effects of these explanations and their modalities on drivers' situational trust, cognitive workload, and explanation satisfaction. A 3 (SA level: SA L1, SA L2, SA L3) × 2 (explanation modality: visual, visual + audio) between-subjects experiment was conducted with 340 participants recruited from Amazon Mechanical Turk. The results indicated that explanations designed with the proposed SA-based framework helped participants redirect their attention to the important objects in the traffic scene and understand their meaning for the AV system. This improved their SA and helped them understand why the AV behaved as it did in particular situations, which in turn increased their situational trust in the AV. Participants reported the highest trust with SA L2 explanations, although cognitive workload was also rated higher at this level. The results also provided insights into the relationship between the amount of information in explanations and modality: participants were more satisfied with visual-only explanations in the SA L1 and SA L2 conditions, and with combined visual and auditory explanations in the SA L3 condition.
Finally, we found that cognitive workload was also higher with SA L2 explanations, possibly because participants were actively interpreting the information, consistent with the higher level of situational trust. These findings demonstrate that properly designed explanations, based on our proposed SA-based framework, have significant implications for explaining AV behavior in conditional and highly automated driving. |
| |
Keywords: | Explanations; Situation awareness; Modality; Automated driving |
This article has been indexed in ScienceDirect and other databases.
|