TY - JOUR
T1 - Self-supervised lie algebra representation learning via optimal canonical metric
AU - Yu, Xiaohan
AU - Pan, Zicheng
AU - Zhao, Yang
AU - Gao, Yongsheng
PY - 2024/2/8
Y1 - 2024/2/8
N2 - Learning discriminative representations with limited training samples is emerging as an important yet challenging visual categorization task. While prior work has shown that incorporating self-supervised learning can improve performance, we found that the direct use of the canonical metric on a Lie group is theoretically incorrect. In this article, we prove that a valid optimization measure should be a canonical metric on the Lie algebra. Based on this theoretical finding, this article introduces a novel self-supervised Lie algebra network (SLA-Net) representation learning framework. By minimizing the canonical metric distance between the target and predicted Lie algebra representations within a computationally convenient vector space, SLA-Net avoids computing a nontrivial geodesic (locally length-minimizing curve) metric on a manifold (curved space). By simultaneously optimizing a single set of parameters shared by self-supervised learning and supervised classification, the proposed SLA-Net gains improved generalization capability. Comprehensive evaluation results on eight public datasets show the effectiveness of SLA-Net for visual categorization with limited samples.
AB - Learning discriminative representations with limited training samples is emerging as an important yet challenging visual categorization task. While prior work has shown that incorporating self-supervised learning can improve performance, we found that the direct use of the canonical metric on a Lie group is theoretically incorrect. In this article, we prove that a valid optimization measure should be a canonical metric on the Lie algebra. Based on this theoretical finding, this article introduces a novel self-supervised Lie algebra network (SLA-Net) representation learning framework. By minimizing the canonical metric distance between the target and predicted Lie algebra representations within a computationally convenient vector space, SLA-Net avoids computing a nontrivial geodesic (locally length-minimizing curve) metric on a manifold (curved space). By simultaneously optimizing a single set of parameters shared by self-supervised learning and supervised classification, the proposed SLA-Net gains improved generalization capability. Comprehensive evaluation results on eight public datasets show the effectiveness of SLA-Net for visual categorization with limited samples.
KW - Algebra
KW - Extraterrestrial measurements
KW - Lie algebra
KW - Lie group
KW - Manifolds
KW - Measurement
KW - optimal canonical metric
KW - Representation learning
KW - self-supervised learning
KW - Task analysis
KW - Visualization
UR - http://www.scopus.com/inward/record.url?scp=85187298739&partnerID=8YFLogxK
UR - http://purl.org/au-research/grants/arc/DP180100958
UR - http://purl.org/au-research/grants/arc/IH180100002
U2 - 10.1109/TNNLS.2024.3355492
DO - 10.1109/TNNLS.2024.3355492
M3 - Article
C2 - 38329862
AN - SCOPUS:85187298739
SN - 2162-237X
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
ER -