TY - CHAP
T1 - Designing Trust in Highly Automated Virtual Assistants: A Taxonomy of Levels of Autonomy
AU - Galdon, Fernando
AU - Hall, Ashley
AU - Wang, Stephen Jia
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021/2/28
Y1 - 2021/2/28
N2 - This paper presents a guiding framework and a multi-level taxonomy of automation levels specifically adapted to Virtual Assistants in the context of Human-Computer Interaction. This trust-based framework incorporates interaction phases, trust-affecting design principles and design techniques. It also introduces a taxonomy of levels of autonomy, explaining each level from a trust perspective. To test the proposed levels, a survey was conducted addressing different contexts. Participants preferred to have total control of the system: Level 1 is the preferred option on average, and levels 2 and 3 account for 40.50% of participants' preference to remain in control of the autonomous system. Combining levels 1, 2, and 3 yields an average of 68.75% of participants demanding the initiative. The neutral level (level 4) is preferred by 15.75% of participants on average. At levels where the initiative resides with the system (levels 5, 6, and 7), only 14.75% of participants would decentralise their decision. Based on the research findings, the authors recommend that designers combine a holistic perspective on trust with contextual awareness in order to integrate the impact of context on interactions. Trust formation is a dynamic process that starts before a user’s first contact with the system and continues long thereafter. Furthermore, as autonomous systems continuously evolve, factors affecting trust change during user interactions with the system and over time; thus, Human-Computer Interaction concepts need to be able to adapt. Future work will be dedicated to further understanding other areas such as reparation and accountability.
AB - This paper presents a guiding framework and a multi-level taxonomy of automation levels specifically adapted to Virtual Assistants in the context of Human-Computer Interaction. This trust-based framework incorporates interaction phases, trust-affecting design principles and design techniques. It also introduces a taxonomy of levels of autonomy, explaining each level from a trust perspective. To test the proposed levels, a survey was conducted addressing different contexts. Participants preferred to have total control of the system: Level 1 is the preferred option on average, and levels 2 and 3 account for 40.50% of participants' preference to remain in control of the autonomous system. Combining levels 1, 2, and 3 yields an average of 68.75% of participants demanding the initiative. The neutral level (level 4) is preferred by 15.75% of participants on average. At levels where the initiative resides with the system (levels 5, 6, and 7), only 14.75% of participants would decentralise their decision. Based on the research findings, the authors recommend that designers combine a holistic perspective on trust with contextual awareness in order to integrate the impact of context on interactions. Trust formation is a dynamic process that starts before a user’s first contact with the system and continues long thereafter. Furthermore, as autonomous systems continuously evolve, factors affecting trust change during user interactions with the system and over time; thus, Human-Computer Interaction concepts need to be able to adapt. Future work will be dedicated to further understanding other areas such as reparation and accountability.
UR - http://www.scopus.com/inward/record.url?scp=85102052192&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-61045-6_14
DO - 10.1007/978-3-030-61045-6_14
M3 - Chapter in an edited book (as author)
AN - SCOPUS:85102052192
T3 - Studies in Computational Intelligence
SP - 199
EP - 211
BT - Studies in Computational Intelligence
PB - Springer Science and Business Media Deutschland GmbH
ER -