Artificially intelligent (AI) systems are increasingly making decisions, previously made by humans, that relate to the management of people at work. Also referred to as ‘algorithmic management’, AI is already used to assess applicants during recruitment and selection (Chamorro-Premuzic & Akhtar, 2019) and to allocate work (Lee, Kusbit, Metsky, & Dabbish, 2015). As with many areas in which AI decision-making is deployed, this raises questions about how fair or just employees believe these decisions to be, given that such decisions can significantly shape whether employees have positive or negative experiences at work, and even whether they are employed in the first place.

Generally, if employees believe organizational decision-making is fair, they are more likely to accept decisions, remain satisfied in their jobs, and even increase their effort. Conversely, employees’ perceptions of organizational injustice can lead to reduced effort, lower job satisfaction, lower organizational commitment, and a higher likelihood of turnover.

Existing research into fairness perceptions of AI decision-making in the workplace largely focuses on procedural justice (how fair and reasonable are the procedures used to make a decision?) and distributive justice (how fair are the outcomes of a decision, such as the allocation of resources?) (e.g. REFS). However, this empirical work suggests that individuals also seek a ‘human touch’ in decision-making and chafe against ‘being reduced to a percentage’ when algorithms are responsible for decisions that affect them (Binns, Van Kleek, Veale, Lyngs, Zhao, & Shadbolt, 2018). This implicates a third justice perception, interactional justice, as critical for understanding the conditions under which employees perceive algorithmic decision-making to be fair or unfair.
This leads to our overarching research questions: (1) how can algorithmic management be designed to improve employees’ perceptions of interactional justice? and (2) which types of workplace decisions made by algorithms lead to stronger or weaker perceptions of interactional justice?
| Short title | AI decision-making with dignity |
| Effective start/end date | 1/07/20 → 31/03/21 |