Explainable recommendation systems (ERSs), which generate high-quality recommendations together with intuitive explanations to help users make appropriate decisions, have attracted increasing attention from researchers. However, most existing ERSs are designed for an offline setting and can hardly adjust their models instantly using online feedback to improve performance. To overcome this limitation, we propose a novel online setting for ERSs and devise an effective model, O3ERS, which performs online learning with good scalability and rigorous theoretical guarantees, yielding better online recommendations and online explanations. O3ERS also addresses two challenging problems in real scenarios: the sparsity and delay of feedback on online explanations, and the partialness and insufficiency of feedback on online recommendations. Specifically, O3ERS not only instantly leverages the knowledge learned from recommendation feedback to compensate for the sparse and delayed explanation feedback, producing better explanations, but also employs a novel exploitation–exploration strategy that incorporates explanation feedback to compensate for the partial and insufficient recommendation feedback, producing better recommendations. Our theoretical analysis and empirical studies on one simulated and two real-world datasets show that our model remarkably outperforms state-of-the-art models in online scenarios.
- Explainable recommendation systems
- Online learning
- Factorization bandit
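To make the exploitation–exploration idea behind the factorization-bandit keyword concrete, the following is a minimal sketch of a generic LinUCB-style contextual bandit, which is a standard building block for online recommendation. This is an illustrative assumption, not the paper's actual O3ERS algorithm: the class name `LinUCB`, the feature dimension, and the update rule are all generic placeholders.

```python
import numpy as np

class LinUCB:
    """Hypothetical LinUCB-style contextual bandit sketch.

    Each arm (candidate item) keeps a ridge-regression estimate of
    expected reward; the selection score adds an exploration bonus
    proportional to the estimate's uncertainty.
    """

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha                                # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]     # per-arm design matrix
        self.b = [np.zeros(dim) for _ in range(n_arms)]   # per-arm reward vector

    def select(self, x):
        # Score = exploitation term (theta^T x) + exploration bonus.
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        # Incorporate observed feedback for the chosen arm.
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy usage: arm 1 consistently yields reward 1, arm 0 yields 0.
bandit = LinUCB(n_arms=2, dim=2, alpha=0.5)
x = np.array([1.0, 0.0])
for _ in range(50):
    bandit.update(0, x, 0.0)
    bandit.update(1, x, 1.0)
print(bandit.select(x))  # → 1
```

Because both arms have been pulled equally often, their exploration bonuses are identical, so the arm with the higher estimated reward wins; early on, under-explored arms would instead be boosted by their larger bonus.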