Striving for simplicity and performance in off-policy DRL: output normalization and non-uniform sampling

Che Wang, Yanqiu Wu, Quan Vuong, Keith Ross*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

Abstract

We aim to develop off-policy DRL algorithms that not only exceed state-of-the-art performance but are also simple and minimalistic. For standard continuous control benchmarks, Soft Actor-Critic (SAC), which employs entropy maximization, currently provides state-of-the-art performance. We first demonstrate that the entropy term in SAC addresses action saturation due to the bounded nature of the action spaces. With this insight, we propose a streamlined algorithm with a simple normalization scheme or with inverted gradients. We show that both approaches can match SAC's sample-efficiency performance without the need for entropy maximization. We then propose a simple non-uniform sampling method for selecting transitions from the replay buffer during training. Extensive experimental results demonstrate that our proposed sampling scheme leads to state-of-the-art sample efficiency on challenging continuous control tasks. We combine all of our findings into one simple algorithm, which we call Streamlined Off-Policy with Emphasizing Recent Experience, for which we provide robust public-domain code.
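
The two ideas named in the abstract can be sketched roughly as follows: rescaling the policy network's pre-activation outputs when their average magnitude exceeds 1 (so the bounded, tanh-squashed actions do not saturate), and sampling mini-batches non-uniformly so that later updates draw only from the most recent portion of the replay buffer. This is an illustrative reading of the abstract, not the authors' released code; the function names and the hyperparameters eta and c_min are assumptions.

```python
# Minimal sketch of output normalization and recency-weighted sampling
# (illustrative; hyperparameter names/values are assumptions, not the paper's code).
import numpy as np

def normalize_policy_output(mu):
    """If the mean absolute pre-tanh action exceeds 1, rescale the whole
    output vector so the subsequent tanh squashing does not saturate."""
    g = np.mean(np.abs(mu))
    return mu / g if g > 1.0 else mu

def recent_experience_batches(buffer_size, num_updates, batch_size,
                              eta=0.996, c_min=5000, rng=np.random):
    """For the k-th of K updates, sample uniformly from only the most
    recent c_k transitions, with c_k shrinking as k grows, so recent
    experience is emphasized over the course of the update phase."""
    for k in range(1, num_updates + 1):
        c_k = max(int(buffer_size * eta ** (k * 1000 / num_updates)), c_min)
        c_k = min(c_k, buffer_size)
        # the most recent c_k transitions occupy indices [buffer_size - c_k, buffer_size)
        yield rng.randint(buffer_size - c_k, buffer_size, size=batch_size)
```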

Original language: English
Title of host publication: ICML'20
Subtitle of host publication: Proceedings of the 37th International Conference on Machine Learning
Editors: Hal Daumé, Aarti Singh
Publisher: JMLR.org
Pages: 10012-10022
Number of pages: 11
Publication status: Published - 2020
Externally published: Yes
Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: 13 Jul 2020 – 18 Jul 2020

Conference

Conference: 37th International Conference on Machine Learning, ICML 2020
City: Virtual, Online
Period: 13/07/20 – 18/07/20
