TY - CHAP
T1 - What are we measuring anyway?
T2 - 31st Benelux Conference on Artificial Intelligence and the 28th Belgian Dutch Conference on Machine Learning, BNAIC/BENELEARN 2019
AU - Bruijnes, Merijn
AU - Fitrianie, Siska
AU - Richards, Deborah
AU - Abdulrahman, Amal
AU - Brinkman, Willem Paul
N1 - Copyright the Author(s) 2019. Version archived for private and non-commercial use with the permission of the author(s) and according to publisher conditions. For further rights please contact the publisher.
PY - 2019
Y1 - 2019
AB - Research into artificial social agents aims at constructing these agents and at establishing an empirically grounded understanding of them, their interaction with humans, and how they can ultimately deliver certain outcomes in areas such as health, entertainment, and education. Key to establishing such understanding is the community’s ability to describe and replicate observations on how users perceive and interact with their agents. In this paper, we address this ability by examining the questionnaires and their constructs used in empirical studies reported in the Intelligent Virtual Agents (IVA) conference proceedings from 2013 to 2018. The literature survey identified 189 constructs used in 89 questionnaires reported across 81 papers. We found unexpectedly little repeated use of questionnaires, as the vast majority of questionnaires (more than 76%) were reported in only a single paper. We expect that this finding will motivate a joint effort by the IVA community towards creating a unified measurement instrument and, in the broader AI community, a renewed interest in the replicability of our (user) studies.
UR - http://www.scopus.com/inward/record.url?scp=85075061199&partnerID=8YFLogxK
UR - http://ceur-ws.org/Vol-2491/
M3 - Conference abstract
AN - SCOPUS:85075061199
T3 - CEUR Workshop Proceedings
BT - BNAIC/BENELEARN 2019: Proceedings of the 31st Benelux Conference on Artificial Intelligence (BNAIC 2019) and the 28th Belgian Dutch Conference on Machine Learning (Benelearn 2019)
A2 - Beuls, Katrien
A2 - Bogaerts, Bart
A2 - Bontempi, Gianluca
A2 - Geurts, Pierre
A2 - Harley, Nick
A2 - Lebichot, Bertrand
A2 - Lenaerts, Tom
A2 - Louppe, Gilles
A2 - Van Eecke, Paul
PB - RWTH Aachen University
CY - Aachen
Y2 - 6 November 2019 through 8 November 2019
ER -