Can we predict the outcomes of deep learning algorithms that simulate and replace professional skills? Understanding the threat of artificial intelligence

J. Michael Innes, Ben Morrison

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

Abstract

Technological developments in artificial intelligence (AI) have produced startling outcomes in the ability of learning algorithms to outperform humans. The threat to employment from AI has spread more widely than previously envisaged and now encroaches upon the future employment of professionals: legal practitioners, accountants, financial advisors and members of a spectrum of health professions are affected. The helping professions, including psychology, have in turn come under the spotlight. This paper briefly outlines the threats to the future employment of psychologists. The main thrust of the paper, however, is an analysis of the nature of the deep learning algorithms used to simulate human thought and behaviour. These algorithms produce reliable simulations of behaviour. They also develop creative and highly innovative solutions, going beyond the capabilities of many humans, and humans now learn from the output of the algorithms. A problem, however, is that, while the algorithms are the product of human thought, their actual processing is opaque: humans cannot inquire into what is going on during the process and see only the output. The paradox is that the development of AI mimics methodological behaviourism, a former paradigm in experimental psychology. Our work strongly suggests that the information entered into the initial assumptions from which the algorithms proceed may be deeply yet subtly biased, owing to methodological flaws in the design of experiments. The outcomes may therefore have undesirable and unforeseen consequences. In understanding the burgeoning threat of AI, professionals must be alert to the biases that exist, to enable a defence against replacement. Psychology has a role to play in developing this defence through its history of behaviouristic methodology and its understanding of the inherent flaws in the design of human experiments.
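The abstract's central technical claim, that bias entering an algorithm's inputs can propagate unseen through an opaque model, can be illustrated with a minimal sketch. The example below is not from the paper: it uses synthetic data, hypothetical variable names, and a plain gradient-descent logistic regression standing in for a deep network. It shows that two equally skilled cases from different groups receive different scores, while the fitted weights are bare numbers that reveal nothing about why.

import numpy as np

# Hypothetical illustration (not part of the paper): a tiny "screening" model
# trained on synthetic data in which one group was historically under-selected.
rng = np.random.default_rng(0)

n = 2000
skill = rng.normal(0, 1, n)           # genuine ability, identical across groups
group = rng.integers(0, 2, n)         # two demographic groups, coded 0 and 1
# Biased historical labels: members of group 1 needed more skill to be selected.
selected = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])
y = selected.astype(float)

# Plain gradient-descent logistic regression.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

print("learned weights:", w, "bias:", b)   # numbers only; the 'reasoning' is opaque

# Two applicants with identical skill but different group membership
# receive different selection probabilities: the historical bias re-emerges.
applicant = np.array([[1.0, 0.0], [1.0, 1.0]])
scores = 1.0 / (1.0 + np.exp(-(applicant @ w + b)))
print("selection probability, group 0 vs group 1:", scores)

In a deep network the same propagation occurs across millions of such weights, which is what makes the bias far harder to detect from the output alone.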
Original language: English
Title of host publication: Innovations in a changing world
Editors: Katrina Andrews, Fiona Ann Papps, Vincent Mancini, Larissa Clarkson, Kathryn Nicholson Perry, Graeme Senior, Eric Brymer
Place of Publication: Sydney, NSW
Publisher: Australian College of Applied Psychology
Pages: 155-167
Number of pages: 12
ISBN (Print): 9780987630902
Publication status: Published - 2020
Externally published: Yes
Event: Australian College of Applied Psychology Conference 2019: Innovations in a changing world - Melbourne, Australia
Duration: 28 Oct 2019 – 28 Oct 2019

Conference

Conference: Australian College of Applied Psychology Conference 2019
Country/Territory: Australia
City: Melbourne
Period: 28/10/19 – 28/10/19

Keywords

  • artificial intelligence
  • employment
  • professions
  • future
