Most existing recommenders focus on providing users with a list of recommended products. In practice, users may only pay attention to the recommendations at the top positions. Our analysis, however, shows that the correctly recommended products from existing methods are not placed at the top positions; this erodes users' patience and engagement. To address this issue, this paper proposes a top-aware recommender distillation (TRD) framework, in which the rankings produced by state-of-the-art recommendation approaches are further refined and reinforced using reinforcement learning. Unlike traditional knowledge distillation, whose goal is to mimic the teacher's behavior, our student recommender goes a step further, aiming to surpass its teacher. More importantly, the proposed TRD can be plugged into any existing recommender, making it generic for real-world deployment. Theoretical analysis demonstrates that TRD is guaranteed to perform at least as well as the basic recommender. Extensive experiments on five state-of-the-art recommendation algorithms across three real-world datasets further show that TRD consistently improves the recommendations at top positions.
- Deep reinforcement learning
- Top-aware recommender systems
- Knowledge distillation
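The "at least as well as the teacher" guarantee sketched in the abstract can be illustrated with a minimal toy example: if the refinement policy's re-ranking of the teacher's top-k list competes against the teacher's original order, and the better of the two (under some reward estimate) is kept, the combined recommender can never score worse than the teacher on that estimate. The function names (`refine_top_k`, `policy`, `reward_estimate`) and the position-discounted reward are illustrative assumptions, not the paper's actual TRD algorithm:

```python
def refine_top_k(teacher_ranking, reward_estimate, k=10, policy=None):
    """Re-rank the teacher's top-k items; keep the teacher's order as a fallback.

    Because the teacher's own ordering is always among the candidate actions,
    the returned list scores at least as high as the teacher's under
    reward_estimate -- a toy analogue of TRD's performance guarantee.
    """
    top_k = teacher_ranking[:k]
    # Candidate re-ranking proposed by the (learned) refinement policy.
    proposal = policy(top_k) if policy is not None else top_k
    # Keep whichever ordering the reward estimate prefers.
    best = max([top_k, proposal], key=reward_estimate)
    return best + teacher_ranking[k:]


# Toy usage: relevance labels and a position-discounted reward (assumed here).
relevance = {"a": 0, "b": 1, "c": 0, "d": 1}

def reward(items):
    return sum(relevance.get(i, 0) / (pos + 1) for pos, i in enumerate(items))

def toy_policy(items):
    # A trivial "policy" that sorts by known relevance (stand-in for an RL agent).
    return sorted(items, key=lambda i: -relevance.get(i, 0))

refined = refine_top_k(["a", "b", "c", "d"], reward, k=4, policy=toy_policy)
# Relevant items "b" and "d" move to the top positions.
```

Here `refined` is `["b", "d", "a", "c"]`, and with `policy=None` the teacher's ranking is returned unchanged, so the refinement never degrades the reward estimate.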