Abstract
Unmanned vehicles usually rely on Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) sensors to achieve high-precision localization for navigation. However, this combination, with its associated costs and infrastructure demands, poses challenges for widespread adoption in mass-market applications. In this paper, we aim to achieve comparable onboard localization performance with only a monocular camera, by tracking deep-learning visual features on a LiDAR-enhanced visual prior map. Experiments show that the proposed algorithm provides centimeter-level global positioning results with metric scale, and that it is easy to integrate and well suited to low-cost robot deployments in real-world applications.
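The core of such a pipeline can be viewed as matching learned 2D features in the live image against 3D landmarks stored in the metric prior map, then solving a Perspective-n-Point (PnP) problem for the global camera pose. The Python/OpenCV sketch below illustrates this idea only; it is not the authors' implementation, and the brute-force descriptor matching, the RANSAC threshold, and the function layout are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's code): localize a monocular camera
# against a LiDAR-scaled 3D prior map by matching learned 2D features
# to stored map landmarks and solving robust PnP.
import numpy as np
import cv2


def localize(map_points_3d, map_descriptors, kpts_2d, descriptors, K):
    """Estimate the global camera pose from 2D-3D correspondences.

    map_points_3d:   (N, 3) landmark positions in the metric map frame
    map_descriptors: (N, D) descriptors stored with each landmark
                     (e.g. from a learned feature extractor)
    kpts_2d:         (M, 2) detected keypoint pixel coordinates
    descriptors:     (M, D) descriptors of the current image
    K:               (3, 3) camera intrinsic matrix
    """
    # Nearest-neighbour matching of image descriptors to map descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(descriptors.astype(np.float32),
                            map_descriptors.astype(np.float32))
    obj = np.float32([map_points_3d[m.trainIdx] for m in matches])
    img = np.float32([kpts_2d[m.queryIdx] for m in matches])

    # Robust PnP with RANSAC to reject descriptor mismatches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, None, reprojectionError=3.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec  # camera pose: x_cam = R @ x_map + tvec
```

Because the prior map is built with LiDAR, the landmark coordinates are metric, so the recovered PnP translation is in metres rather than up to scale, which is what lets a monocular pipeline report globally scaled positions.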
| Original language | English |
|---|---|
| Title of host publication | 2024 IEEE International Conference on Robotics and Automation, ICRA 2024 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 11934-11940 |
| Number of pages | 7 |
| ISBN (Electronic) | 9798350384574 |
| Publication status | Published - 2024 |
| Externally published | Yes |
| Event | 2024 IEEE International Conference on Robotics and Automation, ICRA 2024 - Yokohama, Japan |
| Duration | 13 May 2024 → 17 May 2024 |
Publication series
| Name | Proceedings - IEEE International Conference on Robotics and Automation |
|---|---|
| ISSN (Print) | 1050-4729 |
Conference
| Conference | 2024 IEEE International Conference on Robotics and Automation, ICRA 2024 |
|---|---|
| Country/Territory | Japan |
| City | Yokohama |
| Period | 13/05/24 → 17/05/24 |
Bibliographical note
Publisher Copyright: © 2024 IEEE.
Keywords
- Localization
- Robotics in Under-Resourced Settings
- Sensor Fusion