Abstract
Feature selection is an effective way to reduce computational cost and improve feature quality in large-scale multimedia analysis systems. In this paper, we propose a novel feature selection method in which a hinge loss function with an ℓ2,1 regularization term is used to learn a sparse feature selection matrix for each learning task. Meanwhile, information shared across multiple tasks is also exploited by imposing a constraint that globally restricts the combined feature selection matrices to be low-rank. A convex optimization formulation is obtained by minimizing the trace norm of the combined matrix instead of directly minimizing its rank, and gradient descent is then applied to find the global optimum. Extensive experiments have been conducted on eight datasets covering different multimedia applications, including action recognition, face recognition, object recognition and scene recognition. The experimental results demonstrate that the proposed method outperforms the compared approaches; in particular, clear improvements are observed when the information shared across tasks is highly beneficial to multi-task learning.
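A minimal sketch of an objective consistent with this description, under assumed notation (the per-task matrices $W_t$, sample counts $n_t$, and trade-off parameters $\lambda$, $\gamma$ are illustrative, not taken from the paper): a hinge loss per task, an $\ell_{2,1}$ penalty on each task's selection matrix, and a trace-norm term coupling the tasks,

$$
\min_{W_1,\dots,W_T}\;\sum_{t=1}^{T}\sum_{i=1}^{n_t} \ell_{\mathrm{hinge}}\!\bigl(W_t^{\top} x_i^{t},\, y_i^{t}\bigr)
\;+\; \lambda \sum_{t=1}^{T} \bigl\|W_t\bigr\|_{2,1}
\;+\; \gamma\, \bigl\|\,[\,W_1,\; W_2,\; \dots,\; W_T\,]\,\bigr\|_{*},
$$

where $\|W\|_{2,1} = \sum_j \|w^{j}\|_2$ sums the $\ell_2$ norms of the rows of $W$ (inducing row sparsity for feature selection), and the trace norm $\|\cdot\|_*$, the sum of singular values, is the standard convex surrogate for matrix rank, which keeps the overall problem convex while encouraging the stacked task matrices to be low-rank.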
| Original language | English |
| --- | --- |
| Pages (from-to) | 746-753 |
| Number of pages | 8 |
| Journal | Signal Processing |
| Volume | 120 |
| DOIs | |
| Publication status | Published - 1 Mar 2016 |
| Externally published | Yes |
Keywords
- Feature selection
- Multi-task learning
- Trace norm
- Low-rank