Aggressive Q-learning with ensembles: achieving both high sample efficiency and high asymptotic performance

Yanqiu Wu, Xinyue Chen, Che Wang, Yiming Zhang, Keith Ross*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

Abstract

Recent advances in model-free deep reinforcement learning (DRL) show that simple model-free methods can be highly effective in challenging high-dimensional continuous control tasks. In particular, Truncated Quantile Critics (TQC) achieves state-of-the-art asymptotic training performance on the MuJoCo benchmark with a distributional representation of critics, and Randomized Ensemble Double Q-Learning (REDQ) achieves sample efficiency competitive with state-of-the-art model-based methods by using a high update-to-data ratio and target randomization. In this paper, we propose a novel model-free algorithm, Aggressive Q-Learning with Ensembles (AQE), which improves on the sample efficiency of REDQ and the asymptotic performance of TQC, thereby providing overall state-of-the-art performance during all stages of training. Moreover, AQE is very simple, requiring neither a distributional representation of critics nor target randomization. The effectiveness of AQE is further supported by our extensive experiments, ablations, and theoretical results.
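
The abstract does not spell out how the critic ensemble is combined into a Bellman target. As a minimal illustrative sketch only, the PyTorch snippet below assumes one plausible ensemble rule: average the K smallest estimates among N target critics when forming the target. The ensemble size N, kept count K, network shapes, and all names (`critics`, `target_critics`, etc.) are assumptions for illustration, not the paper's specification, and the entropy term of SAC-style updates is omitted for brevity.

```python
# Hedged sketch of an ensemble Q-learning target (illustrative assumptions:
# N critics, target = average of the K smallest ensemble estimates).
import torch
import torch.nn as nn

N, K, GAMMA = 10, 2, 0.99                  # ensemble size, kept critics, discount (assumed values)
obs_dim, act_dim, batch = 17, 6, 256       # placeholder dimensions

def make_critic():
    # Small MLP critic Q(s, a) -> scalar
    return nn.Sequential(
        nn.Linear(obs_dim + act_dim, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )

critics = [make_critic() for _ in range(N)]          # online ensemble
target_critics = [make_critic() for _ in range(N)]   # target ensemble

# Dummy transition batch standing in for replay-buffer samples.
next_obs = torch.randn(batch, obs_dim)
next_act = torch.randn(batch, act_dim)               # actions from the current policy
reward = torch.randn(batch, 1)
done = torch.zeros(batch, 1)

with torch.no_grad():
    sa = torch.cat([next_obs, next_act], dim=-1)
    q_all = torch.cat([q(sa) for q in target_critics], dim=-1)   # (batch, N)
    q_low_k, _ = torch.topk(q_all, K, dim=-1, largest=False)     # K smallest per sample
    target = reward + GAMMA * (1.0 - done) * q_low_k.mean(dim=-1, keepdim=True)

# Each online critic would then regress toward `target` with an MSE loss,
# typically taking several gradient steps per environment step (high update-to-data ratio).
```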
Original language: English
Title of host publication: Deep Reinforcement Learning Workshop NeurIPS 2022
Publication status: Accepted/In press - 2022
Externally published: Yes

Keywords

  • Deep Reinforcement Learning (DRL)
  • Off-policy algorithms
  • Sample Efficiency

