Abstract
During their first months of life, infants learn to coordinate their perceptions and actions across different modalities. For example, eye-hand coordination relies on combining visual and proprioceptive sensory inputs for controlling eye and hand movements. What drives the development and calibration of such coordination? Here, we put forward a multimodal hierarchical extension of the Active Efficient Coding framework to learn a simple form of eye-hand coordination. By learning to actively compress visual and proprioceptive inputs into a combined multimodal representation, our embodied infant model learns to make eye movements to track an object held in its hand. We find that the abstract multimodal representation improves the tracking accuracy, but only if it emerges after the establishment of the single-modality systems. This suggests the existence of a 'less-is-more' effect for the development of coordinated multimodal sensorimotor behaviors.
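The staged, hierarchical idea sketched in the abstract — unimodal encoders that learn first, followed by a higher-level encoder that compresses their combined codes — can be illustrated with a toy script. This is a minimal sketch under assumed simplifications, not the paper's model: it uses tiny tied-weight linear autoencoders trained on reconstruction error as a stand-in for the efficient-coding objective, and a hypothetical 2-D latent cause (e.g. hand/object position) driving synthetic "visual" and "proprioceptive" observations.

```python
import numpy as np

rng = np.random.default_rng(0)

class TiedLinearAutoencoder:
    """Minimal linear autoencoder with tied weights, trained by SGD on
    reconstruction error (a stand-in for the efficient-coding objective)."""
    def __init__(self, n_in, n_latent, lr=0.005):
        self.W = rng.normal(0.0, 0.1, (n_latent, n_in))
        self.lr = lr

    def encode(self, x):
        return self.W @ x

    def step(self, x):
        z = self.W @ x
        err = self.W.T @ z - x  # reconstruction error
        # gradient of 0.5 * ||W^T W x - x||^2 with tied weights
        grad = np.outer(z, err) + np.outer(self.W @ err, x)
        self.W -= self.lr * grad
        return float(err @ err)

# Hypothetical toy world: a 2-D latent cause (e.g. hand/object position)
# drives an 8-D "visual" and a 4-D "proprioceptive" observation.
A_vis = rng.normal(size=(8, 2))
A_prop = rng.normal(size=(4, 2))

def sample():
    s = rng.normal(size=2)
    return (A_vis @ s + 0.05 * rng.normal(size=8),
            A_prop @ s + 0.05 * rng.normal(size=4))

# Stage 1: unimodal encoders mature first (the staged schedule suggested
# by the 'less-is-more' finding).
enc_vis = TiedLinearAutoencoder(8, 3)
enc_prop = TiedLinearAutoencoder(4, 3)
for _ in range(3000):
    v, p = sample()
    enc_vis.step(v)
    enc_prop.step(p)

# Stage 2: a higher-level encoder compresses the concatenated unimodal
# codes into a shared multimodal representation.
enc_multi = TiedLinearAutoencoder(6, 2)
losses = []
for _ in range(3000):
    v, p = sample()
    m = np.concatenate([enc_vis.encode(v), enc_prop.encode(p)])
    losses.append(enc_multi.step(m))

print(f"multimodal loss: {np.mean(losses[:100]):.3f} "
      f"-> {np.mean(losses[-100:]):.3f}")
```

Because the two observation streams share a single underlying cause, a 2-D multimodal code suffices once the unimodal encoders have stabilized; in the paper's full model the analogous representation additionally drives eye movements for tracking.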
| Original language | English |
|---|---|
| Title of host publication | 2023 IEEE International Conference on Development and Learning, ICDL 2023 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 437-442 |
| Number of pages | 6 |
| ISBN (Electronic) | 9781665470759 |
| DOIs | |
| Publication status | Published - 2023 |
| Event | 2023 IEEE International Conference on Development and Learning, ICDL 2023, Macau, China (9 Nov 2023 → 11 Nov 2023) |
Publication series
| Name | 2023 IEEE International Conference on Development and Learning, ICDL 2023 |
|---|---|
Conference
| Conference | 2023 IEEE International Conference on Development and Learning, ICDL 2023 |
|---|---|
| Country/Territory | China |
| City | Macau |
| Period | 9/11/23 → 11/11/23 |
Bibliographical note
Publisher Copyright: © 2023 IEEE.
Keywords
- Active perception
- Eye-hand coordination
- Less-is-more
- Multimodality
- Sensorimotor development