Abstract
Graph Neural Networks (GNNs) are de facto solutions for structural data learning. However, they are susceptible to low-quality and unreliable structure, which is the norm rather than the exception in real-world graphs. Existing graph structure learning (GSL) frameworks still lack robustness and interpretability. This paper proposes a general GSL framework, SE-GSL, built on structural entropy and the graph hierarchy abstracted in the encoding tree. In particular, we exploit one-dimensional structural entropy to maximize the embedded information content when auxiliary neighbourhood attributes are fused to enhance the original graph. A new scheme for constructing optimal encoding trees is proposed to minimize uncertainty and noise in the graph while ensuring a proper community partition in the hierarchical abstraction. We present a novel sample-based mechanism for restoring the graph structure via the node structural entropy distribution, which increases connectivity among nodes with larger uncertainty in lower-level communities. SE-GSL is compatible with various GNN models and enhances robustness towards noisy and heterophilous structures. Extensive experiments show significant improvements in the effectiveness and robustness of structure learning and node representation learning.
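The one-dimensional structural entropy the abstract refers to is, in the standard formulation, the Shannon entropy of the graph's degree distribution under the stationary random walk: H¹(G) = −Σᵢ (dᵢ/2m)·log₂(dᵢ/2m), where dᵢ is the degree of node i and m the number of edges. A minimal sketch (the function name and example graph are illustrative, not from the paper):

```python
import math

def one_dim_structural_entropy(degrees):
    """One-dimensional structural entropy of an undirected graph:
    H1(G) = -sum_i (d_i / 2m) * log2(d_i / 2m),
    where d_i is the degree of node i and m the number of edges."""
    two_m = sum(degrees)  # the degree sum equals 2m
    return -sum((d / two_m) * math.log2(d / two_m)
                for d in degrees if d > 0)

# Example: a triangle graph (3 nodes, each of degree 2)
print(one_dim_structural_entropy([2, 2, 2]))  # log2(3) ≈ 1.585
```

In SE-GSL this quantity serves as the objective when fusing auxiliary neighbourhood attributes into the enhanced graph; the higher-dimensional (encoding-tree) entropy used for community partitioning is a separate, hierarchical measure.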
Original language | English |
---|---|
Title of host publication | The ACM Web Conference 2023 |
Subtitle of host publication | proceedings of The World Wide Web Conference WWW 2023 |
Place of Publication | New York |
Publisher | Association for Computing Machinery |
Pages | 499-510 |
Number of pages | 12 |
ISBN (Electronic) | 9781450394161 |
DOIs | |
Publication status | Published - 2023 |
Event | 2023 World Wide Web Conference, WWW 2023 - Austin, United States, 30 Apr 2023 → 4 May 2023 |
Conference
Conference | 2023 World Wide Web Conference, WWW 2023 |
---|---|
Country/Territory | United States |
City | Austin |
Period | 30/04/23 → 4/05/23 |
Keywords
- Graph structure learning
- structural entropy
- graph neural network