TY - GEN
T1 - Using trust to determine user decision making and task outcome during a human-agent collaborative task
AU - Herse, Sarita
AU - Vitale, Jonathan
AU - Johnston, Benjamin
AU - Williams, Mary-Anne
PY - 2021/3/8
Y1 - 2021/3/8
AB - Optimal performance of collaborative tasks requires consideration of the interactions between socially intelligent agents, such as social robots, and their human counterparts. The functionality and success of these systems lie in their ability to establish and maintain user trust, with too much or too little trust leading to over-reliance and under-utilisation, respectively. This problem highlights the need for an appropriate trust calibration methodology, with the work in this paper focusing on the first step: investigating user trust as a behavioural prior. Two pilot studies (Studies 1 and 2) are presented, the results of which inform the design of Study 3. Study 3 investigates whether trust can determine user decision making and task outcome during a human-agent collaborative task. Results demonstrate that trust can be behaviourally assessed in this context using an adapted version of the Trust Game. Further, an initial behavioural measure of trust can significantly predict task outcome. Finally, assistance type and task difficulty interact to impact user performance. Notably, participants were able to improve their performance on the hard task when paired with correct assistance, with this improvement comparable to performance on the easy task with no assistance. Future work will focus on investigating factors that influence user trust during human-agent collaborative tasks and providing a domain-independent model of trust calibration.
KW - trust
KW - decision making
KW - signal detection theory
KW - recommender system
KW - human-agent collaboration
KW - socially intelligent agent
UR - http://www.scopus.com/inward/record.url?scp=85102731118&partnerID=8YFLogxK
DO - 10.1145/3434073.3444673
M3 - Conference proceeding contribution
T3 - ACM/IEEE International Conference on Human-Robot Interaction
SP - 73
EP - 82
BT - HRI '21
PB - Association for Computing Machinery (ACM)
CY - New York
T2 - 16th Annual ACM/IEEE International Conference on Human-Robot Interaction (2021)
Y2 - 8 March 2021 through 11 March 2021
ER -