BAIL: Best-Action Imitation Learning for batch deep reinforcement learning

Xinyue Chen, Zijian Zhou, Zheng Wang, Che Wang, Yanqiu Wu, Keith Ross*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

37 Citations (Scopus)


There has recently been a surge in research in batch Deep Reinforcement Learning (DRL), which aims to learn a high-performing policy from a given dataset without additional interactions with the environment. We propose a new algorithm, Best-Action Imitation Learning (BAIL), which strives for both simplicity and performance. BAIL learns a V function, uses the V function to select actions it believes to be high-performing, and then uses those actions to train a policy network via imitation learning. For the MuJoCo benchmark, we provide a comprehensive experimental study of BAIL, comparing its performance to four other batch Q-learning and imitation-learning schemes across a large variety of batch datasets. Our experiments show that BAIL substantially outperforms the other schemes and is also computationally much faster than the batch Q-learning schemes.
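The abstract's three-step recipe (learn a value estimate, select believed-best actions, imitate them) can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: Monte Carlo returns and a generic value estimate `V` stand in for BAIL's learned upper envelope, and the `ratio` threshold parameter is illustrative.

```python
import numpy as np

def compute_returns(rewards, dones, gamma=0.99):
    # Monte Carlo return G_t for each transition in the batch;
    # episodes are delimited by the `dones` flags.
    G = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        if dones[t]:
            running = 0.0  # new episode boundary: reset the tail return
        running = rewards[t] + gamma * running
        G[t] = running
    return G

def select_best_actions(states, actions, returns, V, ratio=1.0):
    # Keep only the (state, action) pairs whose observed return meets
    # or beats the value estimate V(s); these are the "best actions"
    # subsequently used to train the policy by imitation learning.
    mask = returns >= ratio * V(states)
    return states[mask], actions[mask]
```

The selected pairs would then be fed to an ordinary behavior-cloning loss on a policy network; in this sketch `V` can be any callable mapping a batch of states to value estimates.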

Original language: English
Title of host publication: 34th Conference on Neural Information Processing Systems (NeurIPS 2020)
Editors: H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, H. Lin
Place of publication: San Diego
Publisher: Neural Information Processing Systems (NIPS) Foundation
Number of pages: 11
ISBN (Electronic): 9781713829546
Publication status: Published - 2020
Externally published: Yes
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: 6 Dec 2020 - 12 Dec 2020

Publication series

Name: Advances in Neural Information Processing Systems

Conference: 34th Conference on Neural Information Processing Systems, NeurIPS 2020
City: Virtual, Online


