Self-supervised lie algebra representation learning via optimal canonical metric

Xiaohan Yu, Zicheng Pan, Yang Zhao, Yongsheng Gao

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Learning discriminative representations from limited training samples is emerging as an important yet challenging visual categorization task. While prior work has shown that incorporating self-supervised learning can improve performance, we found that the direct use of a canonical metric on a Lie group is theoretically incorrect. In this article, we prove that a valid optimization measurement should be a canonical metric on the Lie algebra. Based on this theoretical finding, we introduce a novel self-supervised Lie algebra network (SLA-Net) representation learning framework. By minimizing the canonical metric distance between target and predicted Lie algebra representations within a computationally convenient vector space, SLA-Net avoids computing a nontrivial geodesic (locally length-minimizing curve) metric on a manifold (curved space). By simultaneously optimizing a single set of parameters shared by self-supervised learning and supervised classification, the proposed SLA-Net gains improved generalization capability. Comprehensive evaluation results on eight public datasets show the effectiveness of SLA-Net for visual categorization with limited samples.
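The abstract's central idea, measuring distance in the Lie algebra (a flat vector space) rather than along geodesics on the group manifold, can be illustrated on SO(3). The sketch below is not the authors' SLA-Net implementation; it is a minimal, self-contained example (functions `exp_so3` and `log_so3` are our own names) showing that the Euclidean metric between log-map vectors closely tracks the geodesic distance while requiring no on-manifold computation.

```python
import numpy as np

def hat(v):
    """Map a 3-vector to its skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(v):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(v / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def log_so3(R):
    """Logarithm map SO(3) -> R^3 (axis-angle vector), valid for theta < pi."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros(3)
    W = (R - R.T) * (theta / (2.0 * np.sin(theta)))
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

# Two nearby rotations generated from Lie algebra vectors (illustrative values)
v1 = np.array([0.10, 0.20, 0.30])
v2 = np.array([0.15, 0.10, 0.25])
R1, R2 = exp_so3(v1), exp_so3(v2)

# Geodesic distance on the manifold: ||log(R1^T R2)||
geodesic = np.linalg.norm(log_so3(R1.T @ R2))

# Lie algebra distance: plain Euclidean metric between log-map vectors
algebra = np.linalg.norm(log_so3(R1) - log_so3(R2))

print(f"geodesic = {geodesic:.4f}, lie-algebra = {algebra:.4f}")
```

For small rotations the two quantities agree up to the Baker-Campbell-Hausdorff commutator term, which is why a loss defined directly on the Lie algebra vectors can serve as a computationally cheap surrogate for the geodesic metric.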

Original language: English
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
DOIs
Publication status: E-pub ahead of print - 8 Feb 2024

Keywords

  • Algebra
  • Extraterrestrial measurements
  • Lie algebra
  • Lie group
  • Manifolds
  • Measurement
  • optimal canonical metric
  • Representation learning
  • self-supervised learning
  • Task analysis
  • Visualization
