Scaffolding deep reinforcement learning agents using dynamical perceptual-motor primitives

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review


Abstract

Agents trained using deep reinforcement learning (DRL) are capable of meeting or exceeding human levels of performance in multi-agent tasks. However, the behaviors exhibited by these agents are not guaranteed to be human-like or human-compatible. This poses a problem if the goal is to design agents capable of collaborating with humans in cooperative or team-based tasks. Previous approaches to encouraging the development of human-compatible agents have relied on pre-recorded human data during training; however, such data is not available for the majority of everyday tasks. Importantly, research on human perceptual-motor behavior has found that task-directed behavior is often low-dimensional and can be decomposed into a defined set of dynamical perceptual-motor primitives (DPMPs). Accordingly, we propose a hierarchical approach that simplifies DRL training by defining the action dynamics of agents using DPMPs at the lower level, while using DRL to train the decision-making dynamics of agents at the higher level. We evaluate our approach using a multi-agent shepherding task employed to study human and multi-agent coordination. Our hierarchical DRL-DPMP approach resulted in agents that trained faster than vanilla, black-box DRL agents. Further, the hierarchical agents reached higher levels of performance not only when interacting with each other during self-play, but also when completing the task alongside agents embodying models of novice and expert human behavior. Finally, the hierarchical DRL-DPMP agents developed decision-making policies that outperformed the heuristic-based agents used in previous research on human-agent coordination.
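The hierarchical split described in the abstract — a DPMP generating low-level action dynamics while a higher-level policy makes discrete decisions — can be sketched as follows. This is an illustrative sketch only: the damped point-attractor form, the gains `b` and `k`, and the `high_level_policy` function (a simple farthest-sheep heuristic standing in for the trained DRL network) are assumptions made for this example, not the paper's actual equations or parameter values.

```python
import numpy as np

class PointAttractorDPMP:
    """Low-level DPMP (illustrative): damped second-order dynamics that
    relax the agent's heading onto a target heading chosen by the
    high-level policy. Gains b, k and dt are assumed values."""

    def __init__(self, b=3.0, k=8.0, dt=0.05):
        self.b, self.k, self.dt = b, k, dt
        self.heading = 0.0       # current heading (rad)
        self.heading_dot = 0.0   # angular velocity (rad/s)

    def step(self, target_heading):
        # Wrapped heading error, so the agent always turns the short way.
        error = np.arctan2(np.sin(target_heading - self.heading),
                           np.cos(target_heading - self.heading))
        # Damped point-attractor: heading_ddot = -b*heading_dot + k*error
        heading_ddot = -self.b * self.heading_dot + self.k * error
        self.heading_dot += heading_ddot * self.dt
        self.heading += self.heading_dot * self.dt
        return self.heading

def high_level_policy(agent_pos, sheep_pos, goal_pos):
    """Stand-in for the DRL decision level: target the sheep farthest
    from the goal and steer to a point behind it, relative to the goal.
    In the paper this decision would come from a trained DRL policy."""
    dists = np.linalg.norm(sheep_pos - goal_pos, axis=1)
    target_sheep = sheep_pos[np.argmax(dists)]
    away = (target_sheep - goal_pos) / np.linalg.norm(target_sheep - goal_pos)
    behind = target_sheep + 0.5 * away
    delta = behind - agent_pos
    return np.arctan2(delta[1], delta[0])

# One decision step: the high level picks the target heading,
# and the DPMP realizes it as continuous action dynamics.
dpmp = PointAttractorDPMP()
agent = np.array([0.0, 0.0])
sheep = np.array([[2.0, 1.0], [4.0, -3.0]])
goal = np.array([0.0, 0.0])
target = high_level_policy(agent, sheep, goal)
for _ in range(100):
    dpmp.step(target)
```

Because the DPMP handles the continuous dynamics, the DRL level only has to learn a low-dimensional decision (which sheep to herd, and when), which is consistent with the faster training the abstract reports.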
Original language: English
Title of host publication: Proceedings of the 45th Annual Conference of the Cognitive Science Society
Editors: M. Goldwater, F. K. Anggoro, B. K. Hayes, D. C. Ong
Place of Publication: Seattle, WA
Publisher: Cognitive Science Society
Pages: 1981-1989
Number of pages: 9
Publication status: Published - 2023
Event: Annual Conference of the Cognitive Science Society (45th : 2023) - Sydney, Australia
Duration: 26 Jul 2023 - 29 Jul 2023

Publication series

Name: Annual Conference of the Cognitive Science Society
Volume: 45
ISSN (Electronic): 1069-7977

Conference

Conference: Annual Conference of the Cognitive Science Society (45th : 2023)
Country/Territory: Australia
City: Sydney
Period: 26/07/23 - 29/07/23

Bibliographical note

Copyright the Author(s) 2023. Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.

Keywords

  • hierarchical deep reinforcement learning
  • dynamical perceptual-motor primitives (DPMPs)
  • multi-agent coordination
  • emergent coordination
  • shepherding
