Top-aware recommender distillation with deep reinforcement learning

Hongyang Liu, Zhu Sun*, Xinghua Qu, Fuyong Yuan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)
12 Downloads (Pure)


Most existing recommenders focus on providing users with a list of recommended products. In practice, however, users may only pay attention to the recommendations at the top positions. Our analysis shows that the correctly recommended products from existing methods are often not placed at the top; this sacrifices users’ patience and engagement. To address this issue, this paper proposes a top-aware recommender distillation (TRD) framework, in which the ranked recommendation lists produced by state-of-the-art recommendation approaches are further reinforced and refined using reinforcement learning. Unlike traditional knowledge distillation, whose goal is to mimic the behavior of the teacher, our recommender goes a step further with the goal of surpassing its teacher recommender. More importantly, the proposed TRD can be plugged into any existing recommender, making it generic for real-world deployment. Theoretical analysis demonstrates that TRD is guaranteed to perform at least as well as the basic recommender. Extensive experiments with five state-of-the-art recommendation algorithms across three real-world datasets further show that TRD consistently improves the recommendations at top positions.
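The core idea described above — a student policy that re-ranks a teacher recommender's candidate list so relevant items rise to the top, while never doing worse than the teacher — can be sketched as follows. This is a hypothetical, simplified illustration, not the paper's implementation: `policy_score` stands in for the deep-reinforcement-learning policy that TRD would learn, and the tiebreak on the teacher's original order mirrors the guarantee of performing at least as well as the basic recommender when the policy is uninformative.

```python
# Hypothetical sketch of TRD-style re-ranking (illustrative only).
# A "teacher" recommender produces a ranked candidate list; a student
# policy re-scores the candidates and may promote items the teacher
# ranked lower. The real TRD learns this policy with deep RL; here
# policy_score is an arbitrary stand-in scoring function.

def trd_rerank(teacher_ranking, policy_score, top_k=3):
    """Re-rank the teacher's candidates by policy score, using the
    teacher's original order as a tiebreaker so the student falls
    back to the teacher's ranking when scores are uninformative."""
    indexed = list(enumerate(teacher_ranking))  # lower index = better teacher rank
    indexed.sort(key=lambda pair: (-policy_score(pair[1]), pair[0]))
    return [item for _, item in indexed[:top_k]]

# Toy usage: the (assumed) policy knows item "c" is actually relevant,
# so it gets promoted from position 3 to position 1.
teacher_list = ["a", "b", "c", "d"]
relevance = {"c": 1.0}  # assumed relevance signal, for illustration
reranked = trd_rerank(teacher_list, lambda i: relevance.get(i, 0.0))
print(reranked)  # → ['c', 'a', 'b']
```

With a zero-everywhere policy the teacher's order is returned unchanged, which is the "at least as well as the basic recommender" fallback in miniature.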
Original language: English
Pages (from-to): 642-657
Number of pages: 16
Journal: Information Sciences
Publication status: Published - Oct 2021


  • Deep reinforcement learning
  • Top-aware recommender systems
  • Knowledge distillation


