TY - GEN
T1 - Contrastive learning with bidirectional transformers for sequential recommendation
AU - Du, Hanwen
AU - Shi, Hui
AU - Zhao, Pengpeng
AU - Wang, Deqing
AU - Sheng, Victor S.
AU - Liu, Yanchi
AU - Liu, Guanfeng
AU - Zhao, Lei
PY - 2022/10
Y1 - 2022/10
AB - Contrastive learning with a Transformer-based sequence encoder has gained predominance in sequential recommendation. It maximizes the agreement between paired sequence augmentations that share similar semantics. However, existing contrastive learning approaches in sequential recommendation mainly center on left-to-right unidirectional Transformers as base encoders, which are suboptimal for sequential recommendation because user behaviors may not form a rigid left-to-right sequence. To tackle this, we propose a novel framework named Contrastive learning with Bidirectional Transformers for sequential recommendation (CBiT). Specifically, we first apply the sliding window technique to long user sequences in bidirectional Transformers, which allows for a more fine-grained division of user sequences. Then we combine the cloze task mask and the dropout mask to generate high-quality positive samples and perform multi-pair contrastive learning, which demonstrates better performance and adaptability than normal one-pair contrastive learning. Moreover, we introduce a novel dynamic loss reweighting strategy to balance the cloze task loss and the contrastive loss. Experimental results on three public benchmark datasets show that our model outperforms state-of-the-art models for sequential recommendation. Our code is available at: https://github.com/hw-du/CBiT/tree/master.
KW - bidirectional sequential model
KW - contrastive learning
KW - sequential recommendation
UR - http://www.scopus.com/inward/record.url?scp=85140840054&partnerID=8YFLogxK
U2 - 10.1145/3511808.3557266
DO - 10.1145/3511808.3557266
M3 - Conference proceeding contribution
AN - SCOPUS:85140840054
T3 - International Conference on Information and Knowledge Management, Proceedings
SP - 396
EP - 405
BT - CIKM 2022 - Proceedings of the 31st ACM International Conference on Information and Knowledge Management
PB - Association for Computing Machinery
CY - New York, NY
T2 - 31st ACM International Conference on Information and Knowledge Management, CIKM 2022
Y2 - 17 October 2022 through 21 October 2022
ER -