Feature-level deeper self-attention network with contrastive learning for sequential recommendation

Yongjing Hao, Tingting Zhang, Pengpeng Zhao*, Yanchi Liu, Victor S. Sheng, Jiajie Xu, Guanfeng Liu, Xiaofang Zhou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Sequential recommendation, which aims to recommend the next item a user will likely interact with in the near future, has become essential in various Internet applications. Existing methods usually consider the transition patterns between items but ignore the transition patterns between the features of items. We argue that item-level sequences alone cannot reveal the full sequential patterns, while explicit and implicit feature-level sequences help to extract them. Meanwhile, item-level sequential recommendation also suffers from sparse supervised signals. In this article, we propose a novel model, Feature-level Deeper Self-Attention Network with Contrastive Learning (FDSA-CL), for sequential recommendation. Specifically, FDSA-CL first integrates various heterogeneous features of items into feature-level sequences with different weights through a vanilla attention mechanism. After that, FDSA-CL applies separate self-attention blocks to item-level sequences and feature-level sequences, respectively, to model item transition patterns and feature transition patterns. Moreover, we propose contrastive learning and item feature recommendation tasks to capture the embedding commonality and further exploit the beneficial interaction between the two levels, so as to alleviate the sparsity of the supervised signal and extract the most critical information. Finally, we jointly optimize the above tasks. We evaluate the proposed model on two real-world datasets, and the experimental results show that our model significantly outperforms state-of-the-art approaches.
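The abstract's first step, fusing an item's heterogeneous features into one vector via a vanilla attention mechanism, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function name, parameter shapes, and the tanh scoring form are assumptions based on the common vanilla-attention formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_features(feature_embs, W, b, q):
    """Vanilla attention over one item's feature embeddings (a sketch).

    feature_embs: (num_features, d) embeddings of heterogeneous features
                  (e.g. category, brand, description) of a single item.
    W: (d, d), b: (d,), q: (d,) are learnable attention parameters.
    Returns the fused (d,) feature vector and the (num_features,) weights.
    """
    # Score each feature, then normalize scores into attention weights.
    scores = np.tanh(feature_embs @ W + b) @ q      # (num_features,)
    alpha = softmax(scores)                          # weights sum to 1
    fused = alpha @ feature_embs                     # weighted sum, (d,)
    return fused, alpha

# Example: three features, embedding dimension 4.
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))
W, b, q = rng.normal(size=(4, 4)), rng.normal(size=4), rng.normal(size=4)
fused, alpha = fuse_features(feats, W, b, q)
```

The fused vectors form the feature-level sequence that, per the abstract, is then fed through its own self-attention block, separate from the item-level one.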

Original language: English
Pages (from-to): 10112-10124
Number of pages: 13
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 35
Issue number: 10
DOIs
Publication status: Published - Oct 2023
