
360DVO: Deep Visual Odometry for Monocular 360-Degree Camera

Xiaopeng Guo, Yinzhe Xu, Huajian Huang*, Sai Kit Yeung

*Corresponding author for this work

Research output: Contribution to journal › Journal Article › peer-review

Abstract

Monocular omnidirectional visual odometry (OVO) systems leverage 360-degree cameras to overcome field-of-view limitations of perspective VO systems. However, existing methods, reliant on handcrafted features or photometric objectives, often lack robustness in challenging scenarios, such as aggressive motion and varying illumination. To address this, we present 360DVO, the first deep learning-based OVO framework. Our approach introduces a distortion-aware spherical feature extractor (DAS-Feat) that adaptively learns distortion-resistant features from 360-degree images. These sparse feature patches are then used to establish constraints for effective pose estimation within a novel omnidirectional differentiable bundle adjustment (ODBA) module. To facilitate evaluation in realistic settings, we also contribute a new real-world OVO benchmark. Extensive experiments on this benchmark and public synthetic datasets (TartanAir V2 and 360VO) demonstrate that 360DVO surpasses state-of-the-art baselines (including 360VO and OpenVSLAM), improving robustness by 50% and accuracy by 37.5%.
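The ODBA implementation itself is not reproduced on this page. As a rough illustration of the geometry any omnidirectional bundle adjustment must handle, the sketch below (Python/NumPy; the function names, the equirectangular pixel convention, and the axis orientation are illustrative assumptions, not the paper's code) maps an equirectangular pixel to a unit bearing vector on the sphere and scores a candidate pose with an angular reprojection residual:

    # Minimal sketch (not the authors' code): the geometric core an
    # omnidirectional differentiable BA must handle -- mapping equirectangular
    # pixels to unit bearing vectors on S^2 and measuring an angular
    # reprojection residual under a candidate pose.
    import numpy as np

    def pixel_to_bearing(u, v, width, height):
        """Map an equirectangular pixel coordinate to a unit bearing vector."""
        lon = (u / width) * 2.0 * np.pi - np.pi      # longitude in [-pi, pi)
        lat = np.pi / 2.0 - (v / height) * np.pi     # latitude in [-pi/2, pi/2]
        return np.array([np.cos(lat) * np.sin(lon),
                         -np.sin(lat),
                         np.cos(lat) * np.cos(lon)])

    def angular_residual(bearing_obs, point_w, R_cw, t_cw):
        """Angle between the observed bearing and the bearing predicted by
        transforming a world point into the camera frame (pose = R_cw, t_cw)."""
        p_c = R_cw @ point_w + t_cw
        bearing_pred = p_c / np.linalg.norm(p_c)
        cos_angle = np.clip(bearing_obs @ bearing_pred, -1.0, 1.0)
        return np.arccos(cos_angle)  # zero when the point reprojects exactly

    # Example: a point straight ahead reprojects to the image centre with
    # zero angular error under the identity pose.
    W, H = 1024, 512
    b = pixel_to_bearing(W / 2, H / 2, W, H)         # centre pixel -> +z axis
    err = angular_residual(b, np.array([0.0, 0.0, 2.0]), np.eye(3), np.zeros(3))
    print(f"angular residual: {err:.6f} rad")

A residual defined on bearing angles avoids the pinhole projection model entirely, which is what lets a bundle adjustment treat all viewing directions of a 360-degree image uniformly rather than only those inside a limited field of view.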

Original language: English
Article number: 11358682
Pages (from-to): 3079-3086
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 11
Issue number: 3
Early online date: 19 Jan 2026
DOIs
Publication status: Published - Mar 2026

Bibliographical note

Publisher Copyright:
© 2016 IEEE.

Keywords

  • Visual odometry
  • Omnidirectional vision
