Similar Literature
1.
As the level of automation in vehicles increases, as in conditional and highly automated vehicles (AVs), drivers are becoming increasingly out of the control loop, especially in unexpected driving scenarios. Although it might not be necessary to require drivers to intervene on most occasions, it is still important to improve drivers’ situation awareness (SA) in unexpected driving scenarios in order to improve their trust in and acceptance of AVs. In this study, we conceptualized SA at the levels of perception (SA L1), comprehension (SA L2), and projection (SA L3), and proposed an SA level-based explanation framework grounded in explainable AI. We then examined the effects of these explanations and their modalities on drivers’ situational trust, cognitive workload, and explanation satisfaction. A three (SA level: SA L1, SA L2, SA L3) by two (explanation modality: visual, visual + audio) between-subjects experiment was conducted with 340 participants recruited from Amazon Mechanical Turk. The results indicated that, with explanations designed using the proposed SA-based framework, participants could redirect their attention to the important objects in the traffic scene and understand their meaning for the AV system. This improved their SA and helped them understand why the AV behaved as it did in particular situations, which also increased their situational trust in the AV. Participants reported the highest trust with SA L2 explanations, although mental workload was also rated higher at this level. The results also provided insights into the relationship between the amount of information in explanations and their modalities: participants were more satisfied with visual-only explanations in the SA L1 and SA L2 conditions, and with combined visual and auditory explanations in the SA L3 condition.
Finally, we found that cognitive workload was also higher with SA L2 explanations, possibly because participants were actively interpreting the explanations, consistent with the higher level of situational trust. These findings demonstrate that properly designed explanations, based on the proposed SA-based framework, have significant implications for explaining AV behavior in conditional and highly automated driving.

2.
To promote the use of driving automation in an appropriate and safe manner, system designers require knowledge about the dynamics of driver trust. To enhance this knowledge, this study manipulated prior information about a partial driving automation system into two types (detailed and limited) and investigated the effects of this information on the development of trust with respect to the three trust attributions proposed by Muir (1994): predictability, dependability, and faith. A driving simulator generated two types of automation failure (limitation and malfunction), and at six instances during the study, 56 drivers completed questionnaires about their levels of trust in the automation. Statistical analysis found that trust ratings of the automation steadily increased with simulation experience, regardless of the drivers’ level of knowledge. Automation failure led to a temporary decrease in trust ratings; however, trust was rebuilt by subsequent experience of flawless automation. Results showed that dependability was the most dominant component of drivers’ trust throughout the experiment, regardless of knowledge level. Interestingly, detailed analysis indicated that trust can be accounted for by different attributions depending on the drivers’ circumstances: subsequent experience of error-free automation after exposure to automation failure made predictability the secondary predictive attribution of drivers’ trust in the detailed group, whilst faith was consistently the secondary contributor to trust in the limited group throughout the experiment. These findings have implications for system design regarding transparency, and for training methods and instruction aimed at improving driving safety in traffic environments with automated vehicles.

3.
Prior studies of automated driving have focused on drivers’ evaluations of advanced driving assistance systems and their knowledge of the technology. An on-road experiment with novice drivers who had never used automated systems was conducted to examine the effects of the automation on the driving experience. Participants drove a Tesla Model 3 sedan with level 2 automation engaged or not engaged on a 4-lane interstate freeway. They reported that driving was more enjoyable and less stressful during automated driving than manual driving. They also indicated that they were less anxious and nervous, and able to relax more with the automation. Their intentions to use and purchase automated systems in the future were correlated with the favorableness of their automated driving experiences. The positive experiences of the first-time users suggest that consumers may not need a great deal of persuading to develop an appreciation for partially automated vehicles.

4.
As in-vehicle voice agents increase in popularity, related research is extending to how voice messages can affect the driver’s cognitive and emotional states. Accordingly, we investigated how in-vehicle agent (IVA) voice dominance and driving automation can affect the driver’s situation awareness (SA), emotion regulation (ER), and trust. To this end, a lab-based experiment was conducted with a medium-fidelity driving simulator using actor-recorded voice agents. Forty-one licensed drivers (22 female, 19 male) were recruited to drive simulated vehicles with voice agents and were evaluated. The results demonstrated that, compared with the dominant voice, the agent with a submissive voice significantly increased ER in both manual and automated driving. Furthermore, the submissive voice significantly increased trust in automated driving compared with the dominant agent. Crossover and synergistic interaction effects were found between voice dominance and driving automation for SA and ER, respectively. This study revealed that both the content of IVA messages and their voice characteristics are essential for modulating the driver’s SA, ER, and trust while driving. Larger-scale future studies, in simulation or on real roads, would increase the validity of these findings.

5.
In the context of increasingly autonomous vehicles, an automatic lateral control device (AS: Automatic Steering) was used to steer the vehicle along the road without driver intervention. The device was not able to detect or avoid obstacles. The experiment aimed to analyse unexpected obstacle avoidance manoeuvres when lateral control was delegated to automation. It was hypothesized that drivers’ skirting behaviours and eye movement patterns would be modified with automated steering compared with a control situation without automation. Eighteen participants took part in a driving simulator study. Steering behaviours and eye movements were analysed during obstacle avoidance episodes. Compared with driving without automation, skirting around obstacles was less effective when drivers had to return from automatic steering to manual control. Eye movements were modified in the presence of automatic steering, revealing visual scanning further ahead in the driving environment. Resuming manual control is not only a problem of action performance but is also related to the reorganisation of drivers’ visual strategies linked to their disengagement from the steering task. Assistance designers should pay particular attention to potential changes in drivers’ activity when developing highly automated vehicles.

6.
The present study investigated the attitudes toward and acceptance of automated shuttles in public transport among 340 individuals who physically experienced the automated shuttle ‘Emily’ from Easymile in a mixed traffic environment on the semi-public EUREF (Europäisches Energieforum) campus in Berlin. Automated vehicle acceptance was modelled as a function of the Unified Theory of Acceptance and Use of Technology (UTAUT) constructs performance expectancy, effort expectancy, social influence, and facilitating conditions; the Diffusion of Innovation Theory (DIT) constructs compatibility and trialability; as well as trust and automated shuttle sharing. The results show that after adding the DIT constructs, automated shuttle sharing, and trust to the model, the effect of performance expectancy on behavioural intention was no longer significant. Instead, compatibility with current travel was the strongest predictor of behavioural intention to use automated shuttles. It was further found that individuals who are willing to share rides in an automated shuttle with fellow travellers (i.e., automated shuttle sharing) and who trust automated shuttles (i.e., trust) are more likely to intend to use automated shuttles (i.e., behavioural intention). The highest mean rating was obtained for believing that automated shuttles are easy to use, while the lowest was obtained for feeling safe inside the automated shuttle without any type of supervision. The analysis revealed a preference for supervision of the automated shuttle via an external control room over supervision by a human steward onboard. We recommend that future research investigate the hypothesis that compatibility could serve as an even stronger predictor than performance expectancy of the behavioural intention to use automated shuttles in public transport.

7.
Previous studies indicate that, if an automated vehicle communicates its system status and intended behaviour, it can increase user trust and acceptance. However, it is still unclear what types of interfaces best portray this kind of information. The present study evaluated different configurations of screens, comparing how they communicated possible hazards in the environment (e.g. vulnerable road users) and vehicle behaviours (e.g. intended trajectory). These interfaces were presented in a fully automated vehicle tested by 25 participants in an indoor arena. Surveys and interviews measured trust, usability, and experience after users were driven by an automated low-speed pod. Participants experienced four types of interfaces, from a simple journey tracker to a windscreen-wide augmented reality (AR) interface that overlays highlighted hazards in the environment and the trajectory of the vehicle. A combination of the survey and interview data showed a clear preference for the AR windscreen and an animated representation of the environment. Trust in the vehicle featuring these interfaces was significantly higher than pre-trial measurements. However, some users questioned whether they wanted to see this information all the time. One additional result was that some users felt motion sick when presented with the more engaging content. This paper provides recommendations for the design of interfaces with the potential to improve trust and user experience within highly automated vehicles.

8.
Technological advances in the automotive industry are bringing automated driving closer to road use. However, one of the most important factors affecting public acceptance of automated vehicles (AVs) is the public’s trust in AVs. Many factors can influence people’s trust, including their perception of risks and benefits, feelings, and knowledge of AVs. This study uses these factors to predict people’s dispositional and initial learned trust in AVs in a survey study conducted with 1175 participants. For each participant, 23 features were extracted from the survey questions to capture their knowledge, perception, experience, behavioral assessment, and feelings about AVs. These features were then used as input to train an eXtreme Gradient Boosting (XGBoost) model to predict trust in AVs. With the help of SHapley Additive exPlanations (SHAP), we were able to interpret the trust predictions of XGBoost and further improve the explainability of the model. Compared to traditional regression models and black-box machine learning models, our findings show that this approach provided both high explainability and high predictability of trust in AVs.
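The pipeline this abstract describes (survey features in, gradient-boosted trust prediction out, followed by a feature-importance explanation) can be sketched as follows. This is an illustrative sketch only, not the authors' code: the study used XGBoost and SHAP on 23 survey-derived features, whereas here scikit-learn's gradient boosting and permutation importance stand in, and the data and the meaning of each feature are synthetic.

```python
# Illustrative sketch only -- NOT the authors' code. The study trained XGBoost
# on 23 survey-derived features and explained it with SHAP; here scikit-learn's
# GradientBoostingClassifier and permutation importance stand in, and the data
# (and the meaning of each feature) are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 1175, 23                      # participants and features, as in the study
X = rng.normal(size=(n, d))
# Synthetic label "trusts AVs", driven mainly by the first two features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Rank features by permutation importance (a model-agnostic cousin of SHAP)
imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("top features:", ranking[:3].tolist())
```

In the actual study, SHAP values over the real survey features would replace the permutation ranking; the structure of the workflow (fit a boosted model, then attribute its predictions to features) is the same.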

9.
An experiment on adaptive automation is described. Reliability of automated fault diagnosis, mode of fault management (manual vs. automated), and fault dynamics affected variables including root-mean-square error, avoidance of accidents and false shutdowns, subjective trust in the system, and operator self-confidence. Results are discussed in relation to levels of automation, models of trust and self-confidence, and theories of human-machine function allocation. Trust in the automation, but not self-confidence, was strongly affected by automation reliability. Operators had difficulty controlling the continuous process only while performing fault management, but could prevent unnecessary shutdowns. Final authority for decisions and action must be allocated to automation in time-critical situations.

10.
The purpose of this study was to examine the effects of vehicle automation and automation failures on driving performance. Previous studies have revealed problems with driving performance in situations with automation failures and attributed this to drivers being out of the loop. It was therefore hypothesized that driving performance is safer with lower than with higher levels of automation. Furthermore, it was hypothesized that driving performance would be affected by the extent of the automation failure. A moving-base driving simulator was used. The design combined semi-automated and highly automated driving with complete, severe, and moderate deceleration failures. In total, the study involved 36 participants. The results indicate that driving performance degrades as the level of automation increases, and that drivers are worse at handling complete than partial deceleration failures.

11.
Perceived risk and trust are crucial for user acceptance of driving automation. In this study, we identify important predictors of perceived risk and trust in a driving simulator experiment and develop models through stepwise regression to predict event-based changes in perceived risk and trust. Twenty-five participants were tasked to monitor SAE Level 2 driving automation (ACC + LC) while experiencing merging and hard-braking events of varying criticality on a motorway. Perceived risk and trust were rated verbally after each event, and continuous perceived risk, pupil diameter, and ECG signals were explored as possible indicators of perceived risk and trust. The regression models show that relative motion with neighbouring road users accounts for most of the variation in perceived risk and trust, and no difference was found between hard braking with merging and hard braking without merging. Drivers trusted the automation more on the second exposure to events. Our models show modest effects of personal characteristics: experienced drivers are less sensitive to risk and trust the automation more, while female participants perceived more risk than males. Perceived risk and trust are highly correlated and have similar determinants. Continuous perceived risk accurately reflects participants’ verbal post-event ratings of perceived risk; use of the brakes is an effective indicator of high perceived risk and low trust, and pupil diameter correlates with perceived risk in the most critical events. The events increased heart rate, but we found no correlation with event criticality. The prediction models and the findings on physiological measures shed light on the event-based dynamics of perceived risk and trust and can guide human-centred automation design to reduce perceived risk and enhance trust.
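The stepwise-regression modelling described above can be illustrated with a minimal forward-selection sketch. This is a hedged example, not the authors' code: scikit-learn's SequentialFeatureSelector stands in for stepwise regression, the data are synthetic, and the predictor names (relative_motion, exposure, driving_experience, gender) are hypothetical labels echoing the abstract.

```python
# Hedged sketch of the stepwise-regression modelling -- NOT the authors' code.
# scikit-learn's SequentialFeatureSelector (forward selection) stands in for
# stepwise regression; the data are synthetic and the predictor names are
# hypothetical labels echoing the abstract.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
names = ["relative_motion", "exposure", "driving_experience", "gender"]
X = rng.normal(size=(n, len(names)))
# Synthetic trust rating dominated by relative motion, as the abstract reports
y = -1.5 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=0.3, size=n)

sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=2,
                                direction="forward").fit(X, y)
selected = [names[i] for i in np.flatnonzero(sfs.get_support())]
print("selected predictors:", selected)
```

Forward selection adds the predictor that most improves cross-validated fit at each step, which mirrors how a stepwise model would surface relative motion as the dominant determinant.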

12.
Automated driving is a key direction in the current development of intelligent vehicles. Until fully automated driving is achieved, the driver and the automated driving system share control of the vehicle and complete the driving task cooperatively. In this human-machine co-driving phase, the driver's trust in the automated driving system is a key factor affecting the efficiency of human-machine cooperation and driving safety; maintaining an appropriate level of trust in the automated vehicle is critical for driving safety. Combining the developmental stages of trust with its influencing factors, this study proposes a dynamic trust framework. The framework divides trust development into four stages: dispositional trust, initial trust, real-time trust, and post-hoc trust, and analyses the core influencing factors at each stage and their interrelations in terms of three key elements: operator characteristics (the human), system characteristics (the automated vehicle system), and situational characteristics (the environment). Based on this framework, trust calibration can proceed along three routes: monitoring and correction, driver training, and optimised HMI design. Future research should pay more attention to the effects of driver characteristics and human-machine system design features on trust, examine real-time measurement and the functional specificity of trust, explore mechanisms of mutual trust between driver and system, and improve the external validity of trust research.

13.
Trust is regarded as one of the main predictors of the adoption of automated buses (ABs). However, theories about trust (development) in technology vary widely, and an in-depth study of trust in ABs specifically is still lacking. The present study fills this gap by presenting results from focus group interviews on trust (development) in shared automated buses prior to exposure. The objectives of this study are to contrast participants’ naïve concepts of trust with theory and to identify underlying factors influencing a priori trust in ABs. Results show that the N = 21 focus group participants used different strategies to familiarise themselves with the new technology of ABs, e.g., comparisons with familiar technologies, fundamental tendencies to approach or avoid, additional information seeking, or anthropomorphisation. These strategies largely support existing theories on trust (development) in technology. Differences between naïve interpretations of trust and its theoretical assumptions were found in focus group debates, where more control over technology limited uncertainty and led to more trust. While theories suggest control and trust are incompatible opposites, participants see control as a way to enhance trust. We provide starting points for further theory development and expansion, and stress the importance of explanations in emerging technologies for building trust and acceptance.

14.
This study applied the Theory of Planned Behaviour (TPB) to assess individuals’ intentions to use fully automated shared passenger shuttles when they become publicly available. In addition, perceived trust was assessed to examine the extent to which this variable could account for additional variance in intentions above the TPB constructs of attitudes, subjective norms, and perceived behavioural control (PBC). Further, also guided by the TPB, the study explored differences in behavioural, normative, and control beliefs between individuals who reported high intentions to use automated passenger shuttles in the future (high intenders) and those who reported low intentions (low intenders). Participants (N = 438; 64% female) aged between 17 and 84 years (mean age = 35.42 years) completed an online questionnaire that took approximately 15 min. The findings revealed that attitudes, subjective norms, and PBC were significant positive predictors of intentions to use fully automated shared passenger shuttles when they become publicly available. When perceived trust was added to the hierarchical regression, it accounted for additional significant variance in intentions above the TPB constructs and was a significant positive predictor of intentions. Further, the results revealed significant differences in beliefs between high and low intenders: high intenders held significantly more positive beliefs towards fully automated shared passenger shuttles than low intenders, and low intenders held significantly more negative beliefs than high intenders. Overall, these findings support the utility of the TPB in examining individuals’ intentions to use fully automated shared passenger shuttles when they become publicly available.
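The hierarchical-regression logic in this abstract (fit the TPB constructs first, then add perceived trust and inspect the gain in explained variance) can be sketched as below. The data are synthetic and the effect sizes arbitrary; only the ΔR² procedure itself follows the abstract.

```python
# Hedged sketch of the hierarchical-regression step -- synthetic data, arbitrary
# effect sizes; only the procedure (fit TPB constructs, then add trust and
# inspect the gain in explained variance) follows the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 438                               # sample size reported in the abstract
attitude, norms, pbc, trust = rng.normal(size=(4, n))
intention = (0.4 * attitude + 0.3 * norms + 0.2 * pbc
             + 0.3 * trust + rng.normal(scale=0.5, size=n))

X_tpb = np.column_stack([attitude, norms, pbc])          # step 1: TPB only
X_full = np.column_stack([attitude, norms, pbc, trust])  # step 2: TPB + trust

r2_tpb = LinearRegression().fit(X_tpb, intention).score(X_tpb, intention)
r2_full = LinearRegression().fit(X_full, intention).score(X_full, intention)
print(f"TPB-only R2={r2_tpb:.2f}, +trust R2={r2_full:.2f}, delta={r2_full - r2_tpb:.2f}")
```

Because the step-2 model nests the step-1 model, in-sample R² can only increase; the substantive question, as in the study, is whether that increase is large and statistically significant.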

15.
What will cyclists do in future conflict situations with automated cars at intersections when the cyclist has the right of way? To explore this, short high-quality animation videos of conflicts between a car and a cyclist at five different intersections were developed. These videos were ‘shot’ from the perspective of the cyclist and ended when a collision was imminent should neither the car nor the cyclist slow down. After each video, participants indicated whether they would slow down or continue cycling, how confident they were about this decision, what they thought the car would do, and how confident they were about what the car would do. The appearance of the approaching car was varied as a within-subjects variable with three levels (Car type): an automated car, an automated car displaying its intentions to the cyclist, and a traditional car. In all situations the cyclist had right of way. Of each conflict, three versions were made that differed in the moment the video ended, created by cutting off fractions from the longest version, thus producing videos with an early, mid, and late moment for the cyclist to decide to continue cycling or to slow down (Decision moment). Before the video experiment started, the participants watched an introductory video about automated vehicles that served as a prime; this video was either positive, negative, or neutral about automated vehicles (Prime type). Both Decision moment and Prime type were between-subjects variables. After the experiment, participants completed a short questionnaire about trust in technology and trust in automated vehicles. In total, 1009 participants, divided into nine groups (one per combination of Decision moment and Prime type), completed the online experiment, in which they watched fifteen videos (5 conflicts × 3 car types). The results show that participants yielded more often when the approaching car was an automated car than when it was a traditional car.
However, when the approaching car was an automated car that could communicate its intentions, they yielded less often than for a traditional car. The earlier the Decision moment, the more often participants yielded, but this increase in yielding did not differ between the three car types. Participants yielded more often for automated cars (both types) after watching the negative prime video than after watching the positive one. The less participants trusted technology, and the capabilities of automated vehicles in particular, the more they were inclined to slow down in the conflict situations with automated cars. The association between trust and yielding was stronger for trust in the capabilities of automated vehicles than for trust in technology in general.

16.
During highly automated driving (Level 3 automation according to SAE International, 2014), people are likely to increase the frequency of secondary-task interactions. However, the driver must still be able to take over control within a reasonable amount of time. Previous studies mainly investigated take-over behavior by forcing participants to engage in secondary tasks prior to take-over, and barely addressed how drivers voluntarily schedule secondary-task processing according to the availability and predictability of automated driving modes. In the current simulator study, 20 participants completed a test drive with alternating sections of manual and highly automated driving. One group had a preview of the availability of the automated driving system in upcoming sections of the track (predictive HMI), while the other drivers served as a control group. A texting task was offered during both driving modes and also prior to take-over situations. Participants were free to accept or reject a given task, taking the situational demands into account. Drivers accepted more tasks during highly automated driving. Furthermore, tasks were rejected more often prior to take-over situations in the predictive HMI group, which was accompanied by safer take-over performance. However, once engaged in a task, drivers tended to continue texting even in take-over situations. The results indicate the need to distinguish different aspects of task handling with regard to self-regulation: task engagement and disengagement.

17.

The action-specific account of perception suggests that information about our ability to act in our environment influences our perceptual system and thus affects our perception. However, exactly which information about action influences perception is still largely unknown. For example, if a goal is achieved through automation rather than action, is perception influenced because the goal was achieved, or is perception immune because the act was automated rather than performed by the observer? In four experiments, we examined whether automating a paddle to block a moving ball in a computer game similar to Pong affects perception of the ball’s speed. Results indicate that the automation used here did not affect speed perception of the target. Whereas tools such as reach-extending sticks and various-sized paddles are incorporated into one’s body schema and influence spatial perception, our results imply that automation is not incorporated into one’s body schema and does not affect spatial perception. The dissociation in how the mind treats tools versus automation could have several implications as automation becomes more prevalent in daily life.


18.
The present study examined the antecedents of trust among operational Air Force fighter pilots for an automatic ground collision avoidance technology. This technology offered a platform with high face validity for studying trust in automation because it is an automatic system currently used in operations by the Air Force. Pilots (N = 142) responded to an online survey that asked about their attitudes toward the technology and assessed a number of psychological factors. Consistent with prior research on trust in automation, a number of trust antecedents were identified, corresponding to human factors, learned trust factors, and situational factors. Implications for the introduction of novel automatic systems into the military are discussed.

19.
This study investigated the utility of emotional expression by human decision aids when the human aid conflicts with an automated decision support system (DSS). The increasing presence of automation in society has produced critical, and often life-threatening, situations in which information from human and automated sources disagrees. Reliance on human aids is known to decrease during high-risk situations, while reliance on automated aids increases. However, it is also possible that human decision aids gain credibility with users when they embody the charismatic and emotionally expressive gesticulations seen in successful organizational leaders. The present study tested how a human agent’s expressiveness when providing information would influence participants’ behavioral reliance. Using the program Convoy Leader, participants (n = 56) engaged in three decision-making scenarios in which risk was manipulated as a within-subjects factor and emotional expression as a between-subjects factor. Emotional susceptibility, perceived risk, and trust in the human and automated aids were measured. Overall trust was higher for the automated tool than for the human decision aid, and that pattern was amplified in conditions without an emotionally expressive human aid. Reliance was greater for emotionally expressive human aids than for stoic human aids, particularly under high-risk conditions. The findings suggest that the emotional expression of a human aid significantly impacts both reliance on and trust in a decision aid, especially at higher risk levels. Emotionally expressive human agents should be utilized in decision conflicts where the automated system has clearly failed.

20.
Automated diagnostic aids prone to false alarms often produce poorer human performance in signal detection tasks than equally reliable miss-prone aids. However, it is not yet clear whether this is attributable to differences in the perceptual salience of the automated aids’ misses and false alarms, or whether it results from inherent differences in operators’ cognitive responses to different forms of automation error. The present experiments therefore examined the effects of automation false alarms and misses on human performance under conditions in which the different forms of error were matched in their perceptual characteristics. Young adult participants performed a simulated baggage x-ray screening task while assisted by an automated diagnostic aid. Judgments from the aid were rendered as text messages presented at the onset of each trial, and every trial was followed by a second text message providing response feedback; thus, misses and false alarms from the aid were matched for perceptual salience. Experiment 1 found that even under these conditions, false alarms from the aid produced poorer human performance and engendered lower automation use than misses. Experiment 2, however, found that the asymmetry between misses and false alarms was reduced when the aid’s false alarms were framed as neutral messages rather than explicit misjudgments. The results suggest that automation false alarms and misses differ in their inherent cognitive salience, and imply that changes in diagnosis framing may allow designers to encourage better use of imperfectly reliable automated aids.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号