TBQ(σ): improving efficiency of trace utilization for off-policy reinforcement learning

Longxiang Shi, Shijian Li*, Longbing Cao, Long Yang, Gang Pan

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding · Conference proceeding contribution · peer-review

4 Citations (Scopus)

Abstract

Off-policy reinforcement learning with eligibility traces is challenging because of the discrepancy between the target policy and the behavior policy. One common approach is to measure the difference between the two policies probabilistically, as in importance sampling and tree-backup. However, existing off-policy learning methods based on probabilistic policy measurement are inefficient when utilizing traces under a greedy target policy, which is ineffective for control problems: the traces are cut immediately when a non-greedy action is taken, which may lose the advantage of eligibility traces and slow down the learning process. Alternatively, non-probabilistic measurement methods such as General Q(λ) and Naive Q(λ) never cut traces, but face convergence problems in practice. To address these issues, this paper introduces a new method named TBQ(σ), which effectively unifies the tree-backup algorithm and Naive Q(λ). By introducing a new parameter σ that controls the degree of trace utilization, TBQ(σ) creates an effective integration of TB(λ) and Naive Q(λ) and a continuous role shift between them. The contraction property of TBQ(σ) is analyzed theoretically for both policy evaluation and control settings. We also derive the online version of TBQ(σ) and give a convergence proof. We show empirically that, for ϵ ∈ (0, 1] in ϵ-greedy policies, there exists some degree of trace utilization for λ ∈ [0, 1] that improves the efficiency of trace utilization in off-policy reinforcement learning, both accelerating the learning process and improving performance.
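The abstract describes TBQ(σ) as a continuous interpolation between TB(λ) (σ = 0, traces cut by the target-policy probability) and Naive Q(λ) (σ = 1, traces never cut). The tabular sketch below only illustrates that idea and is not the paper's exact algorithm: the environment interface (`env.reset`/`env.step`), the helper callables `policy_probs` and `behavior_action`, and the specific decay rule γλ(σ + (1 − σ)π(a|s)) are assumptions drawn from the abstract's description; consult the paper for the precise online update and its convergence conditions.

```python
import numpy as np

def tbq_sigma_episode(env, Q, policy_probs, behavior_action,
                      sigma=0.5, lam=0.9, gamma=0.99, alpha=0.1):
    """One episode of a tabular TBQ(sigma)-style update (illustrative sketch only).

    Q               : (n_states, n_actions) action-value table, updated in place
    policy_probs    : callable s -> target-policy probabilities pi(.|s)  (hypothetical helper)
    behavior_action : callable s -> action sampled from the behavior policy (hypothetical helper)
    """
    e = np.zeros_like(Q)                  # eligibility traces, one per (state, action)
    s = env.reset()                       # assumed simple env interface
    done = False
    while not done:
        a = behavior_action(s)
        s_next, r, done = env.step(a)     # assumed to return (next state, reward, done)

        # Tree-backup-style bootstrap target: expectation over the target policy.
        pi_next = policy_probs(s_next)
        target = r + (0.0 if done else gamma * np.dot(pi_next, Q[s_next]))
        delta = target - Q[s, a]

        e[s, a] += 1.0                    # accumulating trace for the visited pair
        Q += alpha * delta * e            # propagate the TD error along all traces

        # Trace decay interpolating TB(lambda) and Naive Q(lambda):
        #   sigma = 0 -> decay by pi(a|s): tree-backup, traces cut on non-greedy actions
        #   sigma = 1 -> decay by 1:       Naive Q(lambda), traces never cut
        pi_s = policy_probs(s)
        e *= gamma * lam * (sigma + (1.0 - sigma) * pi_s[a])

        s = s_next
    return Q
```

With a greedy target policy, σ > 0 keeps traces partially alive after exploratory actions instead of zeroing them, which is the trace-utilization gain the abstract refers to.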
Original language: English
Title of host publication: AAMAS '19
Subtitle of host publication: proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems
Place of publication: Richland, SC
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 1025-1032
Number of pages: 8
ISBN (Electronic): 9781450363099
Publication status: Published - 2019
Externally published: Yes
Event: 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019 - Montreal, Canada
Duration: 13 May 2019 - 17 May 2019

Conference

Conference: 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
Country/Territory: Canada
City: Montreal
Period: 13/05/19 - 17/05/19

Keywords

  • Reinforcement learning
  • Eligibility traces
  • Deep learning
