TY - JOUR
T1 - Signed network representation by preserving multi-order signed proximity
AU - Xu, Pinghua
AU - Hu, Wenbin
AU - Wu, Jia
AU - Liu, Weiwei
AU - Yang, Yang
AU - Yu, Philip S.
PY - 2023/3
Y1 - 2023/3
N2 - Signed network representation is a key problem for signed network data. Previous studies have shown that expressive node representations can be learned by preserving multi-order signed proximity (SP). However, multi-order SP cannot be perfectly encoded using the limited samples extracted from random walks, which reduces effectiveness. To encode multi-order SP perfectly, we integrate the informativeness of infinite samples to construct high-level summaries of multi-order SP without explicit sampling. Based on these summaries, we propose SPMF, a method in which node representations are obtained through low-rank matrix approximation. Furthermore, we theoretically investigate the rationality of SPMF by examining its relationship with a powerful representation learning architecture. In sign inference and link prediction tasks on several real-world datasets, SPMF is empirically competitive with state-of-the-art methods. Additionally, two tricks are designed to improve the scalability of SPMF: one filters out less informative summaries, and the other is inspired by kernel techniques. Both empirically improve scalability while preserving effectiveness. The code for our methods is publicly available.
AB - Signed network representation is a key problem for signed network data. Previous studies have shown that expressive node representations can be learned by preserving multi-order signed proximity (SP). However, multi-order SP cannot be perfectly encoded using the limited samples extracted from random walks, which reduces effectiveness. To encode multi-order SP perfectly, we integrate the informativeness of infinite samples to construct high-level summaries of multi-order SP without explicit sampling. Based on these summaries, we propose SPMF, a method in which node representations are obtained through low-rank matrix approximation. Furthermore, we theoretically investigate the rationality of SPMF by examining its relationship with a powerful representation learning architecture. In sign inference and link prediction tasks on several real-world datasets, SPMF is empirically competitive with state-of-the-art methods. Additionally, two tricks are designed to improve the scalability of SPMF: one filters out less informative summaries, and the other is inspired by kernel techniques. Both empirically improve scalability while preserving effectiveness. The code for our methods is publicly available.
UR - http://www.scopus.com/inward/record.url?scp=85118653185&partnerID=8YFLogxK
UR - http://purl.org/au-research/grants/arc/DE200100964
U2 - 10.1109/TKDE.2021.3125148
DO - 10.1109/TKDE.2021.3125148
M3 - Article
AN - SCOPUS:85118653185
SN - 1041-4347
VL - 35
SP - 3087
EP - 3100
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 3
ER -