Communicating intent to develop shared situation awareness and engender trust in human-agent teams |
| |
Affiliation: | 1. US Army Research Laboratory, Aberdeen Proving Ground, MD, USA; 2. US Army TARDEC, Warren, MI, USA; 3. Robotic Research, LLC, Gaithersburg, MD, USA |
| |
Abstract: | This paper addresses issues related to integrating autonomy-enabled, intelligent agents into collaborative human-machine teams. Interaction with intelligent machine agents capable of making independent, goal-directed decisions in human-machine teaming operations constitutes a major change from traditional human-machine interaction involving teleoperation. Communicating the machine agent’s intent to human counterparts becomes increasingly important as independent machine decisions become subject to human trust and mental models. The authors present findings from their research suggesting that existing user display technologies, tailored with context-specific information and matched to the human’s knowledge of the machine agent’s decision process, can mitigate misperceptions of the appropriateness of agent behavioral responses. This is important because misperceptions on the part of human team members increase the likelihood of trust degradation and unnecessary interventions, ultimately leading to disuse of the agent. Examples of possible issues associated with communicating agent intent, as well as potential implications for trust calibration, are provided. |
| |
Keywords: | Human-agent teaming; Intent; Shared situation awareness; Trust; User displays |
This article is indexed in ScienceDirect and other databases.
|