Modeling dispositional and initial learned trust in automated vehicles with predictability and explainability
Institution:1. Virtual Vehicle Research GmbH, Inffeldgasse 21a, Graz 8010, Austria;2. Department of Production and Operations Management, University of Graz, Universitaetsstraße 15/E3, Graz 8010, Austria;3. Trafficon - Traffic Consultants GmbH, Strubergasse 26, Salzburg 5020, Austria
Abstract: Technological advances in the automotive industry are bringing automated driving closer to road use. However, one of the most important factors affecting public acceptance of automated vehicles (AVs) is the public's trust in AVs. Many factors can influence people's trust, including perception of risks and benefits, feelings, and knowledge of AVs. This study uses these factors to predict people's dispositional and initial learned trust in AVs, drawing on a survey of 1175 participants. For each participant, 23 features were extracted from the survey questions to capture their knowledge, perception, experience, behavioral assessment, and feelings about AVs. These features were then used as input to train an eXtreme Gradient Boosting (XGBoost) model to predict trust in AVs. With the help of SHapley Additive exPlanations (SHAP), we interpreted the trust predictions of XGBoost to further improve the explainability of the model. Compared to traditional regression models and black-box machine learning models, our findings show that this approach simultaneously provided a high level of both explainability and predictive power for trust in AVs.
Keywords: Trust prediction; XGBoost; SHAP explainer; Feature importance; Automated vehicles
This article is indexed in ScienceDirect and other databases.