Object-and-action aware model for visual language navigation

Yuankai Qi, Zizheng Pan, Shengping Zhang, Anton van den Hengel, Qi Wu*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

53 Citations (Scopus)

Abstract

Vision-and-Language Navigation (VLN) is unique in that it requires turning relatively general natural-language instructions into robot agent actions, on the basis of the visible environment. This requires extracting value from two very different types of natural-language information. The first is object description (e.g., ‘table’, ‘door’), each serving as a cue for the agent to determine the next action by locating the mentioned item in the environment; the second is action specification (e.g., ‘go straight’, ‘turn left’), which allows the agent to directly predict the next movement without relying on visual perception. However, most existing methods pay little attention to distinguishing these two kinds of information during instruction encoding, and mix together the matching of textual object/action encodings with the visual perception/orientation features of candidate viewpoints. In this paper, we propose an Object-and-Action Aware Model (OAAM) that processes these two forms of natural-language instruction separately, enabling each process to flexibly match object-centered or action-centered instructions to their own counterpart visual perception or action orientation features. One side issue caused by this solution is that an object mentioned in the instructions may be visible in the direction of two or more candidate viewpoints, so the OAAM may not predict the viewpoint on the shortest path as the next action. To handle this problem, we design a simple but effective path loss that penalizes trajectories deviating from the ground-truth path. Experimental results demonstrate the effectiveness of the proposed model and path loss, and the superiority of their combination, which achieves a 50% SPL score on the R2R dataset and a 40% CLS score on the R4R dataset in unseen environments, outperforming the previous state of the art.
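For readers who want a concrete picture of the two-branch design the abstract describes, below is a minimal, illustrative sketch in PyTorch. It is not the authors' implementation: the module names, feature dimensions (txt_dim, vis_dim, ori_dim, hid), dot-product attention, and the nearest-point form of the path loss are all assumptions chosen for brevity. The sketch only shows the core idea of matching object-centered text to appearance features and action-centered text to orientation features, then fusing the two candidate scores.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OAAMSketch(nn.Module):
    """Illustrative sketch (not the paper's code) of an
    object-and-action aware scorer: an object branch matched
    against visual appearance features of candidate viewpoints,
    and an action branch matched against orientation features.
    All dimensions are assumed for the example."""

    def __init__(self, txt_dim=512, vis_dim=2048, ori_dim=128, hid=512):
        super().__init__()
        # Separate attention queries so each branch can focus on
        # object words vs. action words in the instruction.
        self.obj_query = nn.Linear(hid, txt_dim)
        self.act_query = nn.Linear(hid, txt_dim)
        # Project each modality into a shared scoring space.
        self.obj_txt = nn.Linear(txt_dim, hid)
        self.act_txt = nn.Linear(txt_dim, hid)
        self.vis_proj = nn.Linear(vis_dim, hid)
        self.ori_proj = nn.Linear(ori_dim, hid)
        # State-dependent scalar weights to fuse the branch scores.
        self.fuse = nn.Linear(hid, 2)

    def forward(self, h_t, word_feats, vis_feats, ori_feats):
        # h_t: (B, hid) agent state; word_feats: (B, L, txt_dim)
        # vis_feats: (B, K, vis_dim); ori_feats: (B, K, ori_dim)
        def attend(query, keys):
            # Dot-product attention of one query over a sequence.
            scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)
            alpha = F.softmax(scores, dim=1)
            return torch.bmm(alpha.unsqueeze(1), keys).squeeze(1)

        # Branch-specific textual contexts from the same instruction.
        obj_ctx = self.obj_txt(attend(self.obj_query(h_t), word_feats))
        act_ctx = self.act_txt(attend(self.act_query(h_t), word_feats))

        # Match each textual context to its own visual counterpart:
        # object words vs. appearance, action words vs. orientation.
        obj_score = torch.bmm(self.vis_proj(vis_feats),
                              obj_ctx.unsqueeze(2)).squeeze(2)  # (B, K)
        act_score = torch.bmm(self.ori_proj(ori_feats),
                              act_ctx.unsqueeze(2)).squeeze(2)  # (B, K)

        # Fuse the two branches into one score per candidate viewpoint.
        w = F.softmax(self.fuse(h_t), dim=1)                    # (B, 2)
        return w[:, :1] * obj_score + w[:, 1:] * act_score      # (B, K)


def path_loss(pred_points, gt_points):
    """Assumed nearest-point variant of the path loss: penalize
    each visited point by its distance to the closest point on
    the ground-truth path. pred_points: (T, 3); gt_points: (M, 3)."""
    d = torch.cdist(pred_points, gt_points)  # (T, M) pairwise distances
    return d.min(dim=1).values.mean()
```

Intuitively, the fusion weights let the agent lean on the action branch when the instruction says 'turn left' and on the object branch when it says 'walk to the table'; the path-loss term would be added to the usual imitation or reinforcement objective to discourage trajectories that drift off the ground-truth path.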

Original language: English
Title of host publication: Computer Vision – ECCV 2020
Subtitle of host publication: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part X
Editors: Andrea Vedaldi, Horst Bischof, Thomas Brox, Jan-Michael Frahm
Place of publication: Cham
Publisher: Springer, Springer Nature
Pages: 303-317
Number of pages: 15
ISBN (Electronic): 9783030586072
ISBN (Print): 9783030586065
DOIs
Publication status: Published - 2020
Externally published: Yes
Event: 16th European Conference on Computer Vision, ECCV 2020 - Glasgow, United Kingdom
Duration: 23 Aug 2020 – 28 Aug 2020

Publication series

Name: Lecture Notes in Computer Science
Volume: 12355
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 16th European Conference on Computer Vision, ECCV 2020
Country/Territory: United Kingdom
City: Glasgow
Period: 23/08/20 – 28/08/20

Keywords

  • Vision-and-Language Navigation
  • Modular network
  • Reward shaping
