Self-supervised Lie algebra representation learning via optimal canonical metric

Xiaohan Yu, Zicheng Pan, Yang Zhao, Yongsheng Gao*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Learning discriminative representations with limited training samples is emerging as an important yet challenging visual categorization task. While prior work has shown that incorporating self-supervised learning can improve performance, we found that the direct use of a canonical metric on a Lie group is theoretically incorrect. In this article, we prove that a valid optimization measure should be a canonical metric on the Lie algebra. Based on this theoretical finding, the article introduces a novel self-supervised Lie algebra network (SLA-Net) representation learning framework. By minimizing the canonical metric distance between the target and predicted Lie algebra representations within a computationally convenient vector space, SLA-Net avoids computing a nontrivial geodesic (locally length-minimizing curve) metric on a manifold (curved space). By simultaneously optimizing a single set of parameters shared by self-supervised learning and supervised classification, the proposed SLA-Net gains improved generalization capability. Comprehensive evaluation results on eight public datasets show the effectiveness of SLA-Net for visual categorization with limited samples.
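The sketch below illustrates, under stated assumptions, the general idea described in the abstract: measuring the discrepancy between a target and a predicted representation in the Lie algebra (a flat vector space reached via the matrix logarithm) rather than through a geodesic distance on the group manifold. It uses SO(3) purely as an assumed example group; the function names and the NumPy/SciPy-based implementation are illustrative and are not the paper's SLA-Net code.

```python
import numpy as np
from scipy.linalg import logm


def log_map_so3(R):
    """Map a rotation matrix in SO(3) to its Lie algebra element in so(3)
    via the matrix logarithm (a 3x3 skew-symmetric matrix)."""
    return np.real(logm(R))


def lie_algebra_distance(R_target, R_pred):
    """Distance between two rotations measured in the Lie algebra:
    the Frobenius norm of the difference of their log-map images,
    computed entirely in a flat vector space."""
    A = log_map_so3(R_target)
    B = log_map_so3(R_pred)
    return np.linalg.norm(A - B, ord="fro")


def geodesic_distance_so3(R_target, R_pred):
    """For comparison: the geodesic distance on the SO(3) manifold,
    i.e. the rotation angle of the relative rotation R_target^T R_pred."""
    R_rel = R_target.T @ R_pred
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)


if __name__ == "__main__":
    # Two small rotations about the z-axis (hypothetical example data).
    def rot_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    R_t, R_p = rot_z(0.30), rot_z(0.35)
    print("Lie algebra (vector space) distance:", lie_algebra_distance(R_t, R_p))
    print("Geodesic distance on SO(3):         ", geodesic_distance_so3(R_t, R_p))
```

The vector-space distance can serve directly as a differentiable training objective, which is the computational convenience the abstract alludes to; how SLA-Net actually parameterizes and optimizes its Lie algebra representations is detailed in the article itself.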

Original language: English
Pages (from-to): 3547-3558
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 36
Issue number: 2
Early online date: 8 Feb 2024
DOIs
Publication status: Published - Feb 2025

