TY - JOUR
T1 - Feature-aware contrastive learning with bidirectional transformers for sequential recommendation
AU - Du, Hanwen
AU - Yuan, Huanhuan
AU - Zhao, Pengpeng
AU - Wang, Deqing
AU - Sheng, Victor S.
AU - Liu, Yanchi
AU - Liu, Guanfeng
AU - Zhao, Lei
PY - 2024/12
AB - Contrastive learning with Transformer-based sequence encoders has gained predominance in sequential recommendation due to its ability to mitigate data noise and data sparsity issues. However, existing contrastive learning approaches for sequential recommendation still suffer from two limitations. First, they mainly center on left-to-right unidirectional Transformers as base encoders, which are suboptimal for sequential recommendation because user behaviors may not follow a rigid left-to-right sequence. Second, they devise contrastive learning objectives only at the sequence level, neglecting the rich self-supervision signals at the feature level. To address these limitations, we propose a novel framework called Feature-aware Contrastive Learning with bidirectional Transformers for sequential Recommendation (FCLRec) to effectively leverage feature information for sequential recommendation. Specifically, we first augment bidirectional Transformers with a novel feature-aware self-attention module that simultaneously models the complex relationships between sequences and features. Next, we propose a novel feature-aware contrastive learning objective that generates a collection of positive samples via three types of augmentations at three different levels. Finally, we adopt feature prediction as an auxiliary task to strengthen the connections between items and features. Experimental results on four public benchmark datasets show that FCLRec outperforms state-of-the-art methods for sequential recommendation.
KW - feature modeling
KW - self-supervised learning
KW - sequential recommendation
UR - http://www.scopus.com/inward/record.url?scp=85181563688&partnerID=8YFLogxK
DO - 10.1109/TKDE.2023.3343345
M3 - Article
AN - SCOPUS:85181563688
SN - 1041-4347
VL - 36
SP - 8192
EP - 8205
JO - IEEE Transactions on Knowledge and Data Engineering
IS - 12
ER -