TY - JOUR
T1 - LMACL: Improving graph collaborative filtering with Learnable Model Augmentation Contrastive Learning
AU - Liu, Xinru
AU - Hao, Yongjing
AU - Zhao, Lei
AU - Liu, Guanfeng
AU - Sheng, Victor S.
AU - Zhao, Pengpeng
PY - 2024/8
Y1 - 2024/8
N2 - Graph collaborative filtering (GCF) has achieved exciting recommendation performance with its ability to aggregate high-order graph structure information. Recently, contrastive learning (CL) has been incorporated into GCF to alleviate data sparsity and noise issues. However, most existing methods employ random or manual augmentation to produce contrastive views, which may destroy the original topology and amplify noise effects. We argue that such augmentation is insufficient to produce the optimal contrastive view, leading to suboptimal recommendation results. In this article, we propose a Learnable Model Augmentation Contrastive Learning (LMACL) framework for recommendation, which effectively combines graph-level and node-level collaborative relations to enhance the expressiveness of the collaborative filtering (CF) paradigm. Specifically, we first use a graph convolutional network (GCN) as the backbone encoder to incorporate multi-hop neighbors into graph-level original node representations by leveraging the high-order connectivity in user-item interaction graphs. At the same time, we treat a multi-head graph attention network (GAT) as an augmentation-view generator to adaptively generate high-quality node-level augmented views. Finally, joint learning enables end-to-end training, in which the mutual supervision and collaborative cooperation of the GCN and GAT achieve learnable model augmentation. Extensive experiments on several benchmark datasets demonstrate that LMACL provides a significant improvement over the strongest baseline, by 2.5%-3.8% in Recall and 1.6%-4.0% in NDCG. Our model implementation code is available at https://github.com/LiuHsinx/LMACL.
AB - Graph collaborative filtering (GCF) has achieved exciting recommendation performance with its ability to aggregate high-order graph structure information. Recently, contrastive learning (CL) has been incorporated into GCF to alleviate data sparsity and noise issues. However, most existing methods employ random or manual augmentation to produce contrastive views, which may destroy the original topology and amplify noise effects. We argue that such augmentation is insufficient to produce the optimal contrastive view, leading to suboptimal recommendation results. In this article, we propose a Learnable Model Augmentation Contrastive Learning (LMACL) framework for recommendation, which effectively combines graph-level and node-level collaborative relations to enhance the expressiveness of the collaborative filtering (CF) paradigm. Specifically, we first use a graph convolutional network (GCN) as the backbone encoder to incorporate multi-hop neighbors into graph-level original node representations by leveraging the high-order connectivity in user-item interaction graphs. At the same time, we treat a multi-head graph attention network (GAT) as an augmentation-view generator to adaptively generate high-quality node-level augmented views. Finally, joint learning enables end-to-end training, in which the mutual supervision and collaborative cooperation of the GCN and GAT achieve learnable model augmentation. Extensive experiments on several benchmark datasets demonstrate that LMACL provides a significant improvement over the strongest baseline, by 2.5%-3.8% in Recall and 1.6%-4.0% in NDCG. Our model implementation code is available at https://github.com/LiuHsinx/LMACL.
KW - collaborative filtering
KW - contrastive learning
KW - graph neural network
KW - recommender systems
UR - http://www.scopus.com/inward/record.url?scp=85196748529&partnerID=8YFLogxK
U2 - 10.1145/3657302
DO - 10.1145/3657302
M3 - Article
AN - SCOPUS:85196748529
SN - 1556-4681
VL - 18
SP - 1
EP - 24
JO - ACM Transactions on Knowledge Discovery from Data
JF - ACM Transactions on Knowledge Discovery from Data
IS - 7
M1 - 177
ER -