Abstract
We study active object tracking, where a tracker takes a visual observation (i.e., a frame sequence) as input and produces camera control signals (e.g., move forward, turn left) as output. Conventional methods tackle tracking and camera control separately, which makes the two modules difficult to tune jointly. It also incurs substantial human effort for labeling and expensive trial-and-error in the real world. To address these issues, we propose in this paper an end-to-end solution via deep reinforcement learning, in which a ConvNet-LSTM function approximator is adopted for direct frame-to-action prediction. We further propose an environment augmentation technique and a customized reward function, both of which are crucial for successful training. The tracker trained in simulators (ViZDoom, Unreal Engine) generalizes well to unseen object moving paths, unseen object appearances, unseen backgrounds, and distracting objects. It can also recover tracking when it occasionally loses the target. In experiments on the VOT dataset, we find that the tracking ability, obtained solely from simulators, can potentially transfer to real-world scenarios.
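The abstract does not give the network configuration, so the sketch below is a minimal, hypothetical ConvNet-LSTM actor-critic in PyTorch, illustrating only the frame-to-action idea. The class name `ConvLSTMTracker` and all sizes (84×84 input, 256-unit LSTM, six discrete camera actions) are assumptions for illustration, not the authors' reported architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMTracker(nn.Module):
    """Hypothetical ConvNet-LSTM policy: frames in, discrete camera actions out.

    Sizes (84x84 RGB input, 256-d LSTM, 6 actions) are illustrative guesses,
    not the configuration reported in the paper.
    """

    def __init__(self, num_actions=6):
        super().__init__()
        # ConvNet encoder applied to each frame independently.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 64 * 7 * 7  # spatial size after the convs on 84x84 input
        # LSTM integrates per-frame features over time.
        self.lstm = nn.LSTM(feat_dim, 256, batch_first=True)
        # Actor-critic heads.
        self.policy = nn.Linear(256, num_actions)  # action logits
        self.value = nn.Linear(256, 1)             # state-value estimate

    def forward(self, frames, state=None):
        # frames: (batch, time, 3, 84, 84)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, state = self.lstm(feats, state)
        return self.policy(out), self.value(out), state
```

In an actor-critic training scheme of the kind commonly used for such trackers, the policy head would be updated by the policy gradient and the value head by a value-regression loss; the LSTM state is carried across frames so the controller can exploit motion cues rather than a single frame.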
| Original language | English |
|---|---|
| Title of host publication | 35th International Conference on Machine Learning, ICML 2018 |
| Editors | Jennifer Dy, Andreas Krause |
| Publisher | International Machine Learning Society (IMLS) |
| Pages | 5191-5200 |
| Number of pages | 10 |
| ISBN (Electronic) | 9781510867963 |
| Publication status | Published - 2018 |
| Externally published | Yes |
| Event | 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden (10 Jul 2018 → 15 Jul 2018) |
Publication series
| Name | 35th International Conference on Machine Learning, ICML 2018 |
|---|---|
| Volume | 7 |
Conference
| Conference | 35th International Conference on Machine Learning, ICML 2018 |
|---|---|
| Country/Territory | Sweden |
| City | Stockholm |
| Period | 10/07/18 → 15/07/18 |
Bibliographical note
Publisher Copyright: © 2018 by the author(s).