A hybrid compact neural architecture for visual place recognition

Marvin Chancan*, Luis Hernandez-Nunez, Ajay Narendra, Andrew B. Barron, Michael Milford

*Corresponding author for this work

Research output: Contribution to journal › Article

Abstract

State-of-the-art algorithms for visual place recognition, and related visual navigation systems, can be broadly split into two categories: computer-science-oriented models, including deep learning and image-retrieval-based techniques with minimal biological plausibility, and neuroscience-oriented dynamical networks that model the temporal properties underlying spatial navigation in the brain. In this letter, we propose a new compact and high-performing place recognition model that bridges this divide for the first time. Our approach comprises two key neural models, one from each category: (1) FlyNet, a compact, sparse two-layer neural network inspired by the brain architecture of fruit flies, Drosophila melanogaster, and (2) a one-dimensional continuous attractor neural network (CANN). The resulting FlyNet+CANN network combines the compact pattern recognition capabilities of our FlyNet model with the powerful temporal filtering capabilities of an equally compact CANN, replicating entirely in a hybrid neural implementation the functionality that yields high performance in algorithmic localization approaches such as SeqSLAM. We evaluate our model, and compare it to three state-of-the-art methods, on two benchmark real-world datasets with small viewpoint variations and extreme environmental changes, achieving 87% AUC under day-to-night transitions, compared to 60% for Multi-Process Fusion, 46% for LoST-X, and 1% for SeqSLAM, while being 6.5, 310, and 1.5 times faster, respectively.
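To make the two components concrete, the sketch below illustrates, in Python/NumPy, the ideas the abstract names: a FlyNet-style layer (sparse random binary projection followed by winner-take-all) feeding a one-dimensional CANN that temporally filters place evidence. All function names, parameter values (connectivity fraction, WTA sparsity, time constant), and the ring-weight construction are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np

def flynet_layer(x, n_out=64, conn_frac=0.1, wta_frac=0.5, seed=0):
    """FlyNet-style encoding (sketch): sparse random binary projection of an
    image feature vector, then winner-take-all (WTA) thresholding.
    Parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # Each output unit connects to a random subset (~conn_frac) of the input.
    W = (rng.random((n_out, x.size)) < conn_frac).astype(np.float32)
    a = W @ x
    # WTA: keep the most active wta_frac of units, silence the rest.
    k = max(1, int(wta_frac * n_out))
    code = np.zeros(n_out, dtype=np.float32)
    code[np.argsort(a)[-k:]] = 1.0
    return code

def ring_weights(n, sigma=2.0, inhib=0.05):
    """Ring connectivity for a 1D CANN: Gaussian local excitation minus
    uniform global inhibition (illustrative parameters)."""
    i = np.arange(n)
    d = np.abs(i[:, None] - i[None, :])
    d = np.minimum(d, n - d)  # wrap-around distance on the ring
    return np.exp(-d**2 / (2.0 * sigma**2)) - inhib

def cann_step(u, inp, W_rec, tau=0.5):
    """One relaxation step of the attractor dynamics: recurrent
    excitation/inhibition plus external (FlyNet-driven) input."""
    return u + tau * (-u + W_rec @ np.maximum(u, 0.0) + inp)

# Toy usage: 50 stored places; query frames are noisy repeats of frames 10..14.
rng = np.random.default_rng(1)
imgs = rng.random((50, 1024))
refs = np.stack([flynet_layer(img) for img in imgs])
W_rec = ring_weights(50)
u = np.zeros(50)
for t in range(10, 15):
    query = flynet_layer(np.clip(imgs[t] + 0.1 * rng.standard_normal(1024), 0, 1))
    sim = refs @ query                        # overlap with each stored code
    u = cann_step(u, sim / sim.max(), W_rec)  # inject normalized evidence
print("Best-match place:", int(np.argmax(u)))  # expect a bump near place 14
```

In the full model, the FlyNet output supplies place evidence to the CANN, whose activity bump acts as the temporally filtered place estimate; the toy loop above mirrors that flow at miniature scale.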

Original language: English
Pages (from-to): 993-1000
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 5
Issue number: 2
DOIs
Publication status: Published - Apr 2020

Keywords

  • Biomimetics
  • localization
  • visual-based navigation
