Using ILP to improve planning in hierarchical reinforcement learning

Mark Reid, Malcolm Ryan

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

7 Citations (Scopus)

Abstract

Hierarchical reinforcement learning has been proposed as a solution to the problem of scaling up reinforcement learning. The RLTOPs Hierarchical Reinforcement Learning System is an implementation of this proposal which structures an agent’s sensors and actions into various levels of representation and control. Disparity between levels of representation means actions can be misused by the planning algorithm in the system. This paper reports on how ILP was used to bridge these representation gaps and shows empirically how this improved the system’s performance. Also discussed are some of the problems encountered when using an ILP system in what is inherently a noisy and incremental domain.

Original language: English
Title of host publication: Inductive Logic Programming - 10th International Conference, ILP 2000, Proceedings
Place of publication: Berlin; Heidelberg
Publisher: Springer, Springer Nature
Pages: 174-190
Number of pages: 17
Volume: 1866
ISBN (Print): 354067795X, 9783540677956
Publication status: Published - 2000
Externally published: Yes
Event: 10th International Conference on Inductive Logic Programming, ILP 2000 - London, United Kingdom
Duration: 24 Jul 2000 - 27 Jul 2000

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 1866
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 10th International Conference on Inductive Logic Programming, ILP 2000
Country/Territory: United Kingdom
City: London
Period: 24/07/00 - 27/07/00


Cite this
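A BibTeX entry assembled from the metadata above (the citation key is an arbitrary choice, not part of the record):

```bibtex
@inproceedings{reid2000ilp,
  author    = {Reid, Mark and Ryan, Malcolm},
  title     = {Using {ILP} to improve planning in hierarchical reinforcement learning},
  booktitle = {Inductive Logic Programming - 10th International Conference, ILP 2000, Proceedings},
  series    = {Lecture Notes in Computer Science},
  volume    = {1866},
  pages     = {174--190},
  publisher = {Springer},
  address   = {Berlin; Heidelberg},
  year      = {2000},
  isbn      = {9783540677956}
}
```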