Computational event-driven vision sensors for in-sensor spiking neural networks

Yue Zhou, Jiawei Fu, Zirui Chen, Fuwei Zhuge, Yasai Wang, Jianmin Yan, Sijie Ma, Lin Xu, Huanmei Yuan, Mansun Chan, Xiangshui Miao, Yuhui He*, Yang Chai*

*Corresponding authors for this work

Research output: Contribution to journal › Journal article › peer-review

169 Citations (Scopus)

Abstract

Neuromorphic event-based image sensors capture only the dynamic motion in a scene, which is then transferred to computation units for motion recognition. This approach, however, introduces latency and can be power-consuming. Here we report computational event-driven vision sensors that capture and directly convert dynamic motion into programmable, sparse and informative spiking signals. The sensors can be used to form a spiking neural network for motion recognition. Each individual vision sensor consists of two parallel photodiodes with opposite polarities and has a temporal resolution of 5 μs. In response to changes in light intensity, the sensors generate spiking signals with different amplitudes and polarities by electrically programming their individual photoresponsivity. The non-volatile and multilevel photoresponsivity of the vision sensors can emulate synaptic weights and can be used to create an in-sensor spiking neural network. Our computational event-driven vision sensor approach eliminates redundant data during the sensing process, as well as the need for data transfer between sensors and computation units.
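The following is a minimal Python sketch of the sensing-and-computing principle described in the abstract; it is an illustrative behavioural model, not the authors' implementation. Each pixel reports only changes in light intensity (the opposite-polarity photodiode pair is abstracted here as a signed intensity change), and the programmed, non-volatile responsivity plays the role of a synaptic weight, so the sensor array itself performs the weighted summation of an in-sensor spiking neural network. The array size, weight values, firing threshold and leak factor below are arbitrary assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): a 16-pixel array,
# two output neurons, and a simple leaky integrate-and-fire readout.
N_PIXELS = 16
N_NEURONS = 2
THRESHOLD = 1.0
LEAK = 0.9

# Each pixel stores one programmable responsivity per output neuron;
# its sign and magnitude act as a synaptic weight.
weights = rng.uniform(-1.0, 1.0, size=(N_NEURONS, N_PIXELS))

def event_spikes(prev_frame, frame, weights):
    """Convert intensity changes into weighted spike currents.

    The opposite-polarity photodiode pair is abstracted as a signed
    intensity change: static pixels contribute nothing (event-driven
    sparsity), while the programmed responsivities weight the events,
    so the array performs the multiply-accumulate in the sensor.
    """
    delta = frame - prev_frame          # only dynamic motion survives
    return weights @ delta              # in-sensor weighted summation

def run(frames):
    """Leaky integrate-and-fire readout over a sequence of frames."""
    membrane = np.zeros(N_NEURONS)
    prev = frames[0]
    for frame in frames[1:]:
        membrane = LEAK * membrane + event_spikes(prev, frame, weights)
        fired = membrane >= THRESHOLD
        membrane[fired] = 0.0           # reset neurons that spiked
        prev = frame
        yield fired

# Toy stimulus: a bright spot moving one pixel per time step.
frames = np.zeros((8, N_PIXELS))
for t in range(8):
    frames[t, t % N_PIXELS] = 1.0

for t, fired in enumerate(run(frames), start=1):
    print(f"t={t}: output spikes {fired.astype(int)}")
```

In this toy model, training or "programming" the network amounts to writing new responsivity values into `weights`, mirroring how the reported devices store multilevel, non-volatile photoresponsivities that can be electrically updated.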

Original language: English
Pages (from-to): 870-878
Number of pages: 9
Journal: Nature Electronics
Volume: 6
Issue number: 11
DOIs
Publication status: Published - Nov 2023

Bibliographical note

Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer Nature Limited.

