TY - JOUR
T1 - Multi-view tensor graph neural networks through reinforced aggregation
AU - Zhao, Xusheng
AU - Dai, Qiong
AU - Wu, Jia
AU - Peng, Hao
AU - Liu, Mingsheng
AU - Bai, Xu
AU - Tan, Jianlong
AU - Wang, Senzhang
AU - Yu, Philip S.
PY - 2023/4
Y1 - 2023/4
N2 - Graph Neural Networks (GNNs) have yielded fruitful results in learning multi-view graph data. However, it is challenging for existing GNNs to capture the potential correlation information (PCI) among the graph structure features of multiple views. It is also challenging to adaptively identify valuable neighbors for node feature fusion in different views. To this end, we propose a novel Reinforced Tensor Graph Neural Network (RTGNN) framework to more effectively perform multi-view graph representation learning by reinforcing inter- and intra-graph aggregation. Specifically, RTGNN first uses tensor decomposition to extract the graph structure features (GSFs) of each view in a common feature space. These GSFs contain the PCI of multiple views and alleviate the fusion conflicts that differences between view feature spaces may cause during cross-view feature fusion. Since fusing the features of all neighbor nodes may harm the features of the center node, we filter out irrelevant neighbors to improve intra-graph aggregation in each view. Concretely, a reinforcement learning (RL)-guided scheme is developed to automatically compute the optimal filtering threshold for each view, avoiding tedious manual tuning and infeasible backpropagation updates. Experimental results and analysis on five datasets show that RTGNN surpasses the best multi-view graph representation baselines, achieving a maximum F1 improvement of 14.26%. The code is available at https://github.com/RingBDStack/RTGNN.
UR - http://www.scopus.com/inward/record.url?scp=85124762683&partnerID=8YFLogxK
UR - http://purl.org/au-research/grants/arc/DE200100964
U2 - 10.1109/TKDE.2022.3142179
DO - 10.1109/TKDE.2022.3142179
M3 - Article
AN - SCOPUS:85124762683
SN - 1041-4347
VL - 35
SP - 4077
EP - 4091
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 4
ER -