Multiple-object tracking based on monocular camera and 3-D lidar fusion for autonomous vehicles

Hao Chen, Chunyue Xue, Shoubin Liu, Yuxiang Sun, Yongquan Chen*

*Corresponding author for this work

Research output: Chapter in Book/Conference Proceeding/Report › Conference paper published in a book › peer-review

Abstract

This article describes a multi-object tracking method for autonomous vehicles that fuses data from a monocular camera and a 3-D lidar. Specifically, several pairwise costs are designed from cues such as the locations, movements, and poses of 3-D objects. These costs complement each other to reduce matching errors during tracking, and they are efficient enough to be computed online on embedded hardware. The pairwise costs are fed into a data-association framework based on the Hungarian algorithm, followed by back-end fusion of the tracking results. Experiments on our autonomous sightseeing car demonstrate that the proposed method achieves accurate and robust tracking in real-world traffic scenarios.
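The core association step described in the abstract, combining several pairwise costs into one matrix and solving it with the Hungarian algorithm, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the `[x, y, vx, vy]` track state, the weights `w_loc`/`w_motion`, and the `gate` threshold are all assumptions standing in for the paper's location, movement, and pose cues.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm solver

def combined_cost(tracks, detections, w_loc=1.0, w_motion=0.5):
    """Build a pairwise cost matrix from complementary cues.

    tracks, detections: arrays of shape (N, 4) / (M, 4) holding
    hypothetical [x, y, vx, vy] state vectors. The paper combines
    richer 3-D cues (location, movement, pose); here two simple
    terms illustrate how per-cue costs are weighted and summed.
    """
    cost = np.zeros((len(tracks), len(detections)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            loc_cost = np.linalg.norm(t[:2] - d[:2])     # position distance
            motion_cost = np.linalg.norm(t[2:] - d[2:])  # velocity mismatch
            cost[i, j] = w_loc * loc_cost + w_motion * motion_cost
    return cost

def associate(tracks, detections, gate=5.0):
    """Hungarian data association with a simple gating threshold."""
    cost = combined_cost(tracks, detections)
    rows, cols = linear_sum_assignment(cost)  # minimum-cost assignment
    # Reject matches whose combined cost exceeds the gate so distant
    # track-detection pairs are left unmatched rather than forced together.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]
```

For example, two tracks near (0, 0) and (10, 10) are correctly matched to detections observed in swapped order, because the summed location and motion costs make the crossed assignment far more expensive than the direct one.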

Original language: English
Title of host publication: IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 456-460
Number of pages: 5
ISBN (Electronic): 9781728163215
DOIs
Publication status: Published - Dec 2019
Externally published: Yes
Event: 2019 IEEE International Conference on Robotics and Biomimetics, ROBIO 2019 - Dali, China
Duration: 6 Dec 2019 – 8 Dec 2019

Publication series

Name: IEEE International Conference on Robotics and Biomimetics, ROBIO 2019

Conference

Conference: 2019 IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
Country/Territory: China
City: Dali
Period: 6/12/19 – 8/12/19

Bibliographical note

Publisher Copyright:
© 2019 IEEE.

