Abstract
Developmental robotics seeks to build robots that learn to interact with the environment largely autonomously. Such robots can calibrate their sensorimotor competencies on their own, much like developing children. In this paper, we build a developmental model of image stabilization based on the active efficient coding (AEC) framework and apply the model to a real robotic platform. In the visual system of primates, the optokinetic response (OKR) and the vestibulo-ocular reflex (VOR) cooperate to ensure image stabilization during relative motion between the observer and the environment. Inspired by these biological findings, our model integrates visual, inertial, and motor-encoder sensory cues. The sensory processing and the motor policy co-develop: visual processing is based on a sparse coding algorithm, while motor behavior is learned using reinforcement learning. Our results show that stabilization performance is improved by integrating visual and inertial inputs. Importantly, the weighting between the two inputs is learned automatically as the robot interacts with the environment.
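The core idea of learning the weighting between visual and inertial cues can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy model, not the paper's AEC implementation: it assumes the true retinal slip is observed through two hypothetical noisy sensors (a visual estimate and a less noisy inertial estimate) and learns a scalar fusion weight `w` by stochastic gradient descent on the squared error of the fused estimate. The paper instead learns the behavior via reinforcement learning on a sparse-coding reconstruction objective; this sketch only shows why a learned weight comes to favor the more reliable cue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noise levels (assumptions, not from the paper):
# the visual slip estimate is noisier than the inertial one.
VIS_NOISE, INERTIAL_NOISE = 0.5, 0.2

def sense(true_slip):
    """Return noisy visual and inertial estimates of the true retinal slip."""
    vis = true_slip + rng.normal(0.0, VIS_NOISE)
    inert = true_slip + rng.normal(0.0, INERTIAL_NOISE)
    return vis, inert

# Learn a fusion weight w in [0, 1] online: fused = w*vis + (1-w)*inert.
# Gradient of 0.5*(fused - slip)^2 with respect to w is err*(vis - inert).
w, lr = 0.5, 0.01
for _ in range(5000):
    slip = rng.normal(0.0, 1.0)          # random relative motion
    vis, inert = sense(slip)
    err = w * vis + (1.0 - w) * inert - slip
    w -= lr * err * (vis - inert)        # SGD step on the fusion weight
    w = min(max(w, 0.0), 1.0)            # keep w a valid mixing weight

# The weight converges near the reliability-optimal value
# sigma_inert^2 / (sigma_vis^2 + sigma_inert^2) = 0.04/0.29 ~= 0.14,
# i.e. the robot learns to trust the less noisy inertial cue more.
print(round(w, 2))
```

The same qualitative outcome is what the paper reports: the relative weighting of the two modalities is not hand-tuned but emerges from interaction with the environment.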
| Original language | English |
|---|---|
| Title of host publication | ICRA 2017 - IEEE International Conference on Robotics and Automation |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 3546-3551 |
| Number of pages | 6 |
| ISBN (Electronic) | 9781509046331 |
| DOIs | |
| Publication status | Published - 21 Jul 2017 |
| Event | 2017 IEEE International Conference on Robotics and Automation, ICRA 2017 - Singapore, Singapore |
| Duration | 29 May 2017 → 3 Jun 2017 |
Publication series
| Name | Proceedings - IEEE International Conference on Robotics and Automation |
|---|---|
| ISSN (Print) | 1050-4729 |
Conference
| Conference | 2017 IEEE International Conference on Robotics and Automation, ICRA 2017 |
|---|---|
| Country/Territory | Singapore |
| City | Singapore |
| Period | 29/05/17 → 3/06/17 |
Bibliographical note
Publisher Copyright: © 2017 IEEE.