Active visual tracking of heading direction by combining motion energy neurons

Stanley Y.M. Lam*, Bertram E. Shi

*Corresponding author for this work

Research output: Contribution to journal › Conference article published in journal › peer-review


Abstract

We describe a robotic vision system that aligns a camera's optical axis with its direction of translation by estimating the focus of expansion. Visual processing is based on functional models of populations of neurons in cortical areas V1 through MST. Populations of motion energy neurons tuned to different orientations, positions, and directions of motion are successively transformed into a population of neurons that collectively encodes the focus of expansion at 25 frames per second. We characterize the performance of the system as it translates through a cluttered environment, and show that this performance is robust to variations in system parameters.
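The geometry underlying focus-of-expansion (FOE) estimation can be sketched independently of the paper's neural population encoding: under pure camera translation, optic flow vectors point radially away from the FOE, so each flow vector constrains the FOE to lie on the line through its image point along its direction. The sketch below is a hypothetical least-squares illustration of that constraint on synthetic flow data; it is not the authors' motion-energy-based method, and the flow field, noise level, and point counts are assumptions for illustration only.

```python
import numpy as np

# Hypothetical illustration (not the paper's method): least-squares
# FOE estimation from a sparse, noisy expansion flow field.

rng = np.random.default_rng(0)
true_foe = np.array([40.0, -25.0])            # assumed FOE location (pixels)

# Sample image points and their noisy expansion flow vectors:
# flow at p is proportional to (p - FOE) under pure translation.
pts = rng.uniform(-100, 100, size=(200, 2))
flow = 0.05 * (pts - true_foe) + rng.normal(0.0, 0.1, size=(200, 2))

# Each flow vector constrains the FOE via n . (foe - p) = 0,
# where n is the unit normal to the flow direction at point p.
norms = np.stack([-flow[:, 1], flow[:, 0]], axis=1)
norms /= np.linalg.norm(norms, axis=1, keepdims=True)
b = np.sum(norms * pts, axis=1)

# Stack all constraints n . foe = n . p and solve in the least-squares sense.
foe_est, *_ = np.linalg.lstsq(norms, b, rcond=None)
print(foe_est)
```

With a few hundred flow samples, the estimate lands close to the true FOE; the paper's system instead reads the FOE out of a neural population response, which avoids explicitly extracting individual flow vectors.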

Original language: English
Article number: 4253352
Pages (from-to): 3171-3174
Number of pages: 4
Journal: Proceedings - IEEE International Symposium on Circuits and Systems
DOIs
Publication status: Published - 2007
Event: 2007 IEEE International Symposium on Circuits and Systems, ISCAS 2007 - New Orleans, LA, United States
Duration: 27 May 2007 - 30 May 2007

