Do I trust a machine? Differences in user trust based on system performance

Kun Yu, Shlomo Berkovsky, Dan Conway, Ronnie Taib, Jianlong Zhou, Fang Chen

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Trust plays an important role in various user-facing systems and applications. It is particularly important in the context of decision support systems, where the system's output serves as one of the inputs to the users' decision-making processes. In this chapter, we study the dynamics of explicit and implicit user trust in a simulated automated quality monitoring system as a function of the system's accuracy. We establish that users correctly perceive the accuracy of the system and adjust their trust accordingly. The results also show notable differences between two groups of users and indicate a possible threshold in the acceptance of the system. This finding can be leveraged by designers of practical systems to sustain the desired level of user trust.
Original language: English
Title of host publication: Human and machine learning
Subtitle of host publication: Visible, explainable, trustworthy and transparent
Editors: Jianlong Zhou, Fang Chen
Place of publication: Cham, Switzerland
Publisher: Springer, Springer Nature
Chapter: 12
Pages: 245-264
Number of pages: 20
ISBN (Electronic): 9783319904030
ISBN (Print): 9783319904023
DOIs
Publication status: Published - 2018
Externally published: Yes

Publication series

Name: Human-Computer Interaction Series
Publisher: Springer
ISSN (Print): 1571-5035
ISSN (Electronic): 2524-4477
