Cross-modal retrieval: a pairwise classification approach

Aditya Krishna Menon, Didi Surian, Sanjay Chawla

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

Abstract

Content is increasingly available in multiple modalities (such as images, text, and video), each of which provides a different representation of some entity. The cross-modal retrieval problem is: given the representation of an entity in one modality, find its best representation in all other modalities. We propose a novel approach to this problem based on pairwise classification. The approach applies seamlessly both when ground-truth annotations for the entities are absent and when they are present. In the former case, the approach considers both positive and unlabelled links that arise in standard cross-modal retrieval datasets. Empirical comparisons show improvements over state-of-the-art methods for cross-modal retrieval.
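The paper itself gives the full method; purely to illustrate the general idea of pairwise classification for cross-modal retrieval, here is a minimal sketch. It assumes pre-extracted image and text feature vectors, uses simple concatenation as the joint pair representation, and treats randomly sampled unlinked pairs as negatives (a crude surrogate for the unlabelled links the paper discusses). All names and design choices below are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy features: n entities, each with an image view and a text view.
# In practice these come from feature extractors; here they are random.
n, d_img, d_txt = 100, 16, 12
X_img = rng.normal(size=(n, d_img))
X_txt = rng.normal(size=(n, d_txt))

def pair_features(img_vec, txt_vec):
    """Joint representation of an (image, text) pair.
    Plain concatenation here; the paper's feature map may differ."""
    return np.concatenate([img_vec, txt_vec])

# Positive pairs: each entity linked to its own text representation.
pos = [pair_features(X_img[i], X_txt[i]) for i in range(n)]

# Unlabelled pairs: mismatched image-text combinations, treated as
# negatives for this sketch.
neg = []
for _ in range(n):
    i, j = rng.integers(n), rng.integers(n)
    if i != j:
        neg.append(pair_features(X_img[i], X_txt[j]))

X = np.vstack(pos + neg)
y = np.array([1] * len(pos) + [0] * len(neg))

# Binary classifier over pairs: "is this image-text pair a true link?"
clf = LogisticRegression(max_iter=1000).fit(X, y)

def retrieve_text(query_img, k=5):
    """Rank all text representations for one image query by pair score."""
    scores = clf.predict_proba(
        np.vstack([pair_features(query_img, t) for t in X_txt])
    )[:, 1]
    return np.argsort(-scores)[:k]

print("Top-5 texts for image 0:", retrieve_text(X_img[0]))
```

Retrieval in the other direction (text to image) follows by symmetry: score each candidate image against the text query and rank.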
Original language: English
Title of host publication: Proceedings of the 2015 SIAM International Conference on Data Mining
Editors: Suresh Venkatasubramanian, Jieping Ye
Place of Publication: Philadelphia, PA, USA
Publisher: Society for Industrial and Applied Mathematics
Pages: 199-207
Number of pages: 9
ISBN (Print): 9781611974010
DOIs
Publication status: Published - 2015
Externally published: Yes
Event: SIAM International Conference on Data Mining (15th : 2015) - Vancouver, British Columbia, Canada
Duration: 30 Apr 2015 - 2 May 2015

Conference

Conference: SIAM International Conference on Data Mining (15th : 2015)
City: Vancouver, British Columbia, Canada
Period: 30/04/15 - 2/05/15
