TY - GEN
T1 - Meta-optimized joint generative and contrastive learning for sequential recommendation
AU - Hao, Yongjing
AU - Zhao, Pengpeng
AU - Fang, Junhua
AU - Qu, Jianfeng
AU - Liu, Guanfeng
AU - Zhuang, Fuzhen
AU - Sheng, Victor S.
AU - Zhou, Xiaofang
PY - 2024
Y1 - 2024
N2 - Sequential Recommendation (SR) has received increasing attention due to its ability to capture users' dynamic preferences. Recently, Contrastive Learning (CL) has provided an effective approach for sequential recommendation by learning invariance from different views of an input. However, most existing data or model augmentation methods may destroy the semantic characteristics of sequential interactions and often rely on hand-crafted contrastive view-generation strategies. In this paper, we propose a Meta-optimized Seq2Seq Generator and Contrastive Learning (Meta-SGCL) framework for sequential recommendation, which applies a meta-optimized two-step training strategy to adaptively generate contrastive views. Specifically, Meta-SGCL first introduces a simple yet effective augmentation method called the Sequence-to-Sequence (Seq2Seq) generator, which treats a Variational AutoEncoder (VAE) as the view generator and can construct contrastive views while preserving the original sequence's semantics. Next, the model employs a meta-optimized two-step training strategy, which aims to adaptively generate contrastive views without relying on manually designed view-generation techniques. Finally, we evaluate our proposed method Meta-SGCL on three public real-world datasets. Compared with state-of-the-art methods, our experimental results demonstrate the effectiveness of our model. The code is available at https://anonymous.4open.science/status/Meta-SGCL-05B5
AB - Sequential Recommendation (SR) has received increasing attention due to its ability to capture users' dynamic preferences. Recently, Contrastive Learning (CL) has provided an effective approach for sequential recommendation by learning invariance from different views of an input. However, most existing data or model augmentation methods may destroy the semantic characteristics of sequential interactions and often rely on hand-crafted contrastive view-generation strategies. In this paper, we propose a Meta-optimized Seq2Seq Generator and Contrastive Learning (Meta-SGCL) framework for sequential recommendation, which applies a meta-optimized two-step training strategy to adaptively generate contrastive views. Specifically, Meta-SGCL first introduces a simple yet effective augmentation method called the Sequence-to-Sequence (Seq2Seq) generator, which treats a Variational AutoEncoder (VAE) as the view generator and can construct contrastive views while preserving the original sequence's semantics. Next, the model employs a meta-optimized two-step training strategy, which aims to adaptively generate contrastive views without relying on manually designed view-generation techniques. Finally, we evaluate our proposed method Meta-SGCL on three public real-world datasets. Compared with state-of-the-art methods, our experimental results demonstrate the effectiveness of our model. The code is available at https://anonymous.4open.science/status/Meta-SGCL-05B5
KW - contrastive learning
KW - meta-optimized
KW - Seq2Seq Generator
KW - Sequential Recommendation
UR - http://www.scopus.com/inward/record.url?scp=85200451161&partnerID=8YFLogxK
U2 - 10.1109/ICDE60146.2024.00060
DO - 10.1109/ICDE60146.2024.00060
M3 - Conference proceeding contribution
AN - SCOPUS:85200451161
T3 - Proceedings - International Conference on Data Engineering
SP - 705
EP - 718
BT - ICDE 2024
PB - Institute of Electrical and Electronics Engineers (IEEE)
CY - Piscataway, NJ
T2 - IEEE International Conference on Data Engineering (40th : 2024)
Y2 - 13 May 2024 through 17 May 2024
ER -