Abstract
Content is increasingly available in multiple modalities (such as images, text, and video), each of which provides a different representation of some entity. The cross-modal retrieval problem is: given the representation of an entity in one modality, find its best representation in all other modalities. We propose a novel approach to this problem based on pairwise classification. The approach applies seamlessly to settings where ground-truth annotations for the entities are absent as well as present. In the former case, the approach considers both positive and unlabelled links that arise in standard cross-modal retrieval datasets. Empirical comparisons show improvements over state-of-the-art methods for cross-modal retrieval.
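The sketch below is a minimal, hypothetical illustration of a pairwise-classification baseline for cross-modal retrieval in the spirit of the abstract, not the authors' implementation. It assumes precomputed image and text feature matrices, a set of observed image-text links treated as positives, and randomly sampled unobserved pairs treated as unlabelled (here used as negatives); scikit-learn's logistic regression stands in for whatever pairwise classifier the paper actually uses, and the concatenation-based pair representation is likewise an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def pair_features(img_vec, txt_vec):
    # Represent an (image, text) pair as a single feature vector by
    # concatenation; the paper's actual pairwise representation may differ.
    return np.concatenate([img_vec, txt_vec])


def train_pairwise_classifier(image_feats, text_feats, positive_pairs,
                              n_unlabelled=1000, seed=0):
    """Fit a binary classifier on observed (positive) cross-modal links and
    randomly sampled unobserved (unlabelled, treated as negative) links."""
    rng = np.random.default_rng(seed)
    pos = set(positive_pairs)
    X = [pair_features(image_feats[i], text_feats[j]) for i, j in positive_pairs]
    y = [1] * len(X)
    # Sample unobserved pairs as unlabelled examples.
    while len(X) < len(pos) + n_unlabelled:
        i = int(rng.integers(len(image_feats)))
        j = int(rng.integers(len(text_feats)))
        if (i, j) not in pos:
            X.append(pair_features(image_feats[i], text_feats[j]))
            y.append(0)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(np.array(X), np.array(y))
    return clf


def retrieve_texts(clf, img_vec, text_feats, k=5):
    # Rank all candidate texts for a query image by the classifier's
    # estimated probability that the pair is a true cross-modal link.
    scores = clf.predict_proba(
        np.array([pair_features(img_vec, t) for t in text_feats]))[:, 1]
    return np.argsort(-scores)[:k]
```

Retrieval in the opposite direction (text query, image candidates) follows the same pattern with the roles of the two modalities swapped.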
Original language | English |
---|---|
Title of host publication | Proceedings of the 2015 SIAM International Conference on Data Mining |
Editors | Suresh Venkatasubramanian, Jieping Ye |
Place of Publication | Philadelphia, PA, USA |
Publisher | Society for Industrial and Applied Mathematics |
Pages | 199-207 |
Number of pages | 9 |
ISBN (Print) | 9781611974010 |
DOIs | |
Publication status | Published - 2015 |
Externally published | Yes |
Event | SIAM International Conference on Data Mining (15th : 2015) - Vancouver, British Columbia, Canada; Duration: 30 Apr 2015 → 2 May 2015 |
Conference
Conference | SIAM International Conference on Data Mining (15th : 2015) |
---|---|
City | Vancouver, British Columbia, Canada |
Period | 30/04/15 → 2/05/15 |