Abstract
Large language model-based explainable recommendation (LLM-based ER) systems can provide remarkably human-like explanations and have received wide attention from researchers. However, existing LLM-based ER systems suffer from three quality problems in their generated explanations: lack of personalization, inconsistency, and reliance on questionable explanation data. To address these problems, we propose a novel LLM-based ER model, LLM2ER, to serve as a backbone, and devise two innovative explainable quality reward models for fine-tuning this backbone in a reinforcement learning paradigm. The resulting fine-tuned model, LLM2ER-EQR, can generate personalized, informative, and consistent high-quality explanations even when learned from explanation datasets of questionable quality. Extensive experiments conducted on three real-world datasets demonstrate that our model generates fluent, diverse, informative, and highly personalized explanations.
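The paradigm the abstract describes — a generator fine-tuned against a learned quality reward in a reinforcement learning loop — can be illustrated with a minimal REINFORCE-style sketch. Everything below (the toy unigram "policy", the hand-coded `toy_reward`) is a hypothetical stand-in for illustration only, not the paper's LLM2ER backbone or its explainable quality reward models.

```python
# Toy sketch of reward-guided fine-tuning in the REINFORCE style.
# The unigram "policy" and hand-coded reward are assumptions for
# illustration, not the paper's actual architecture.
import math
import random

random.seed(0)

VOCAB = ["generic", "good", "loved", "the", "camera", "battery"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def toy_reward(token):
    # Stand-in "explainable quality reward": prefer item-specific words
    # over the uninformative filler word "generic".
    if token in ("camera", "battery"):
        return 1.0
    if token == "generic":
        return -1.0
    return 0.0

def finetune(steps=2000, lr=0.5):
    logits = [0.0] * len(VOCAB)  # toy unigram "policy" over explanation words
    for _ in range(steps):
        probs = softmax(logits)
        i = sample(probs)
        r = toy_reward(VOCAB[i])
        # REINFORCE update: the gradient of log pi(i) w.r.t. the logits
        # is one_hot(i) - probs, scaled here by the reward.
        for j in range(len(VOCAB)):
            logits[j] += lr * r * ((1.0 if j == i else 0.0) - probs[j])
    return softmax(logits)

probs = finetune()
# After fine-tuning, informative words dominate the filler word.
```

The same shape scales up conceptually: replace the unigram policy with an LLM that generates explanations and the hand-coded reward with learned reward models scoring explanation quality.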
| Original language | English |
|---|---|
| Pages (from-to) | 9250-9259 |
| Number of pages | 10 |
| Journal | Proceedings of the AAAI Conference on Artificial Intelligence |
| Volume | 38 |
| Issue number | 8 |
| DOIs | |
| Publication status | Published - 2024 |
| Event | AAAI Conference on Artificial Intelligence (38th: 2024), Vancouver, Canada, 20 Feb 2024 → 27 Feb 2024 |
| Title | Fine-tuning large language model based explainable recommendation with explainable quality reward |