Paper Title

Human Trust-based Feedback Control: Dynamically varying automation transparency to optimize human-machine interactions

Authors

Kumar Akash, Griffon McMahon, Tahira Reid, Neera Jain

Abstract

Human trust in automation plays an essential role in interactions between humans and automation. While a lack of trust can lead to a human's disuse of automation, over-trust can result in a human trusting a faulty autonomous system which could have negative consequences for the human. Therefore, human trust should be calibrated to optimize human-machine interactions with respect to context-specific performance objectives. In this article, we present a probabilistic framework to model and calibrate a human's trust and workload dynamics during his/her interaction with an intelligent decision-aid system. This calibration is achieved by varying the automation's transparency---the amount and utility of information provided to the human. The parameterization of the model is conducted using behavioral data collected through human-subject experiments, and three feedback control policies are experimentally validated and compared against a non-adaptive decision-aid system. The results show that human-automation team performance can be optimized when the transparency is dynamically updated based on the proposed control policy. This framework is a first step toward widespread design and implementation of real-time adaptive automation for use in human-machine interactions.
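To make the closed-loop idea in the abstract concrete, below is a minimal Python sketch of a POMDP-style trust estimator with automation transparency as the control input: a hidden trust state is inferred from the human's responses, and a feedback policy adjusts transparency based on the current belief. The two-level trust state, all transition and observation probabilities, and the threshold policy are illustrative assumptions for this sketch only, not the paper's experimentally identified model or validated control policies.

```python
import numpy as np

# Hypothetical example: hidden trust state in {0: low, 1: high},
# transparency levels in {0: low, 1: medium, 2: high}, and a binary
# observation (0: human rejects the aid's recommendation, 1: accepts).
# All probabilities are placeholders, not identified parameters.

# P_trans[u][i][j]: P(next trust = j | current trust = i, transparency = u)
P_trans = np.array([
    [[0.90, 0.10], [0.30, 0.70]],   # low transparency
    [[0.70, 0.30], [0.15, 0.85]],   # medium transparency
    [[0.50, 0.50], [0.05, 0.95]],   # high transparency
])

# P_obs[i][o]: P(observation = o | trust = i); higher trust -> more acceptance
P_obs = np.array([
    [0.70, 0.30],   # low trust: mostly rejects the recommendation
    [0.20, 0.80],   # high trust: mostly accepts the recommendation
])

def belief_update(belief, u, obs):
    """One POMDP-style belief update: predict with the trust dynamics
    under transparency u, then correct using the observed response."""
    predicted = belief @ P_trans[u]
    corrected = predicted * P_obs[:, obs]
    return corrected / corrected.sum()

def transparency_policy(belief):
    """Toy feedback policy: raise transparency when estimated trust is low,
    and lower it (sparing the human's workload) when trust is high."""
    p_high_trust = belief[1]
    if p_high_trust < 0.4:
        return 2    # high transparency to rebuild trust
    elif p_high_trust < 0.7:
        return 1
    return 0        # trust appears calibrated; minimal information suffices

# Closed-loop simulation against a sequence of observed human responses
belief = np.array([0.5, 0.5])
for obs in [0, 0, 1, 1, 1]:
    u = transparency_policy(belief)
    belief = belief_update(belief, u, obs)
    print(f"transparency={u}, P(high trust)={belief[1]:.2f}")
```

In the paper's framework, the threshold rule above would be replaced by a control policy optimized against context-specific performance objectives, and the model would also track workload alongside trust; this sketch only illustrates the estimate-then-actuate loop.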
