Abstract
Off-policy reinforcement learning with eligibility traces is challenging because of the
discrepancy between the target policy and the behavior policy. One common approach is to
measure the difference between the two policies probabilistically, as in importance
sampling and tree-backup. However, existing off-policy learning methods based on
probabilistic policy measurement use traces inefficiently under a greedy target policy,
which hampers control problems: the traces are cut as soon as a non-greedy action is
taken, which forfeits the advantage of eligibility traces and slows down learning.
Alternatively, non-probabilistic measurement methods such as General Q(λ) and Naive Q(λ)
never cut traces, but face convergence problems in practice. To address these issues, this paper
introduces a new method named TBQ(σ), which effectively unifies the tree-backup
algorithm and Naive Q(λ). By introducing a new parameter σ to control the degree to
which traces are utilized, TBQ(σ) creates an effective integration of TB(λ)
and Naive Q(λ) and shifts continuously between them. The contraction property
of TBQ(σ) is analyzed theoretically for both policy evaluation and control
settings. We also derive the online version of TBQ(σ) and prove its convergence.
We show empirically that, for ϵ ∈ (0, 1] in ϵ-greedy policies and λ ∈ [0, 1],
there exists a degree of trace utilization that improves the efficiency of
off-policy reinforcement learning, both accelerating learning and improving
performance.
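As a rough illustration of the idea described above (not the paper's exact update rule, whose details are not given here): TB(λ) decays a trace by γλπ(a|s), so a non-greedy action under a greedy target policy (π(a|s) = 0) cuts the trace, while Naive Q(λ) decays by γλ regardless of the action. A σ-weighted interpolation between the two decay factors is one plausible sketch of how a single parameter could shift continuously between these behaviors; the function name and the exact interpolation form are assumptions.

```python
def trace_decay(sigma, lam, gamma, pi_a):
    """Hypothetical interpolated trace-decay factor.

    sigma = 0 recovers TB(lambda)'s pi-weighted decay (gamma * lam * pi(a|s));
    sigma = 1 recovers Naive Q(lambda)'s action-independent decay (gamma * lam).
    """
    return gamma * lam * (sigma + (1.0 - sigma) * pi_a)

gamma, lam = 0.99, 0.9
# A non-greedy action under a greedy target policy has pi(a|s) = 0:
print(trace_decay(0.0, lam, gamma, 0.0))  # TB(lambda): the trace is cut
print(trace_decay(1.0, lam, gamma, 0.0))  # Naive Q(lambda): the trace survives
print(trace_decay(0.5, lam, gamma, 0.0))  # intermediate sigma: partial retention
```

For greedy actions (π(a|s) = 1) the two endpoints coincide, so σ only matters on non-greedy steps, which is exactly where the trace-cutting problem arises.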
Original language | English
---|---
Title of host publication | AAMAS '19
Subtitle of host publication | Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems
Place of Publication | Richland, SC
Publisher | International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages | 1025-1032
Number of pages | 8
ISBN (Electronic) | 9781450363099
Publication status | Published - 2019
Externally published | Yes
Event | 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019 - Montreal, Canada. Duration: 13 May 2019 → 17 May 2019
Conference
Conference | 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
---|---
Country/Territory | Canada
City | Montreal
Period | 13/05/19 → 17/05/19
Keywords
- Reinforcement learning
- Eligibility traces
- Deep learning